mirror of
https://github.com/zhayujie/chatgpt-on-wechat.git
synced 2026-05-07 03:32:18 +08:00
Compare commits
104 Commits
@@ -5,7 +5,7 @@

 最新版本支持的功能如下:

 - ✅ **多端部署:** 有多种部署方式可选择且功能完备,目前已支持微信公众号、企业微信应用、飞书、钉钉等部署方式
-- ✅ **基础对话:** 私聊及群聊的消息智能回复,支持多轮会话上下文记忆,支持 GPT-3.5, GPT-4, GPT-4o, Claude-3, Gemini, 文心一言, 讯飞星火, 通义千问,ChatGLM-4,Kimi(月之暗面), MiniMax
+- ✅ **基础对话:** 私聊及群聊的消息智能回复,支持多轮会话上下文记忆,支持 GPT-3.5, GPT-4o-mini, GPT-4o, GPT-4, Claude-3.5, Gemini, 文心一言, 讯飞星火, 通义千问,ChatGLM-4,Kimi(月之暗面), MiniMax
 - ✅ **语音能力:** 可识别语音消息,通过文字或语音回复,支持 azure, baidu, google, openai(whisper/tts) 等多种语音模型
 - ✅ **图像能力:** 支持图片生成、图片识别、图生图(如照片修复),可选择 Dall-E-3, stable diffusion, replicate, midjourney, CogView-3, vision模型
 - ✅ **丰富插件:** 支持个性化插件扩展,已实现多角色切换、文字冒险、敏感词过滤、聊天记录总结、文档总结和对话、联网搜索等插件
@@ -18,6 +18,10 @@

 3. 本项目主要接入协同办公平台,推荐使用公众号、企微自建应用、钉钉、飞书等接入通道,其他通道为历史产物已不维护
+4. 任何个人、团队和企业,无论以何种方式使用该项目、对何对象提供服务,所产生的一切后果,本项目均不承担任何责任

 ## 演示

 DEMO视频:https://cdn.link-ai.tech/doc/cow_demo.mp4

 ## 社区

 添加小助手微信加入开源项目交流群:
@@ -42,8 +46,14 @@

 # 🏷 更新日志

->**2024.06.20:** [1.6.7版本](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/1.6.7),MiniMax模型、工作流图片输入、模型列表完善
->
+>**2024.09.26:** [1.7.2版本](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/1.7.2) 和 [1.7.1版本](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/1.7.1) 文心,讯飞等模型优化、o1 模型、快速安装和管理脚本
+>
+>**2024.08.02:** [1.7.0版本](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/1.7.0) 新增 讯飞4.0 模型、知识库引用来源展示、相关插件优化
+>
+>**2024.07.19:** [1.6.9版本](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/1.6.9) 新增 gpt-4o-mini 模型、阿里语音识别、企微应用渠道路由优化
+>
+>**2024.07.05:** [1.6.8版本](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/1.6.8) 和 [1.6.7版本](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/1.6.7),Claude3.5, Gemini 1.5 Pro, MiniMax模型、工作流图片输入、模型列表完善

 >**2024.06.04:** [1.6.6版本](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/1.6.6) 和 [1.6.5版本](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/1.6.5),gpt-4o模型、钉钉流式卡片、讯飞语音识别/合成

 >**2024.04.26:** [1.6.0版本](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/1.6.0),新增 Kimi 接入、gpt-4-turbo版本升级、文件总结和语音识别问题修复
@@ -72,8 +82,13 @@

 # 🚀 快速开始

-快速开始详细文档:[项目搭建文档](https://docs.link-ai.tech/cow/quick-start)
+- 快速开始详细文档:[项目搭建文档](https://docs.link-ai.tech/cow/quick-start)
+
+- 快速安装脚本,详细使用指导:[一键安装启动脚本](https://github.com/zhayujie/chatgpt-on-wechat/wiki/%E4%B8%80%E9%94%AE%E5%AE%89%E8%A3%85%E5%90%AF%E5%8A%A8%E8%84%9A%E6%9C%AC)
+
+```bash
+bash <(curl -sS https://cdn.link-ai.tech/code/cow/install.sh)
+```
+
+- 项目管理脚本,详细使用指导:[项目管理脚本](https://github.com/zhayujie/chatgpt-on-wechat/wiki/%E9%A1%B9%E7%9B%AE%E7%AE%A1%E7%90%86%E8%84%9A%E6%9C%AC)

 ## 一、准备

 ### 1. 账号注册
@@ -169,7 +184,7 @@ pip3 install -r requirements-optional.txt

 **4.其他配置**

-+ `model`: 模型名称,目前支持 `gpt-3.5-turbo`, `gpt-4o`, `gpt-4-turbo`, `gpt-4`, `wenxin` , `claude` , `gemini`, `glm-4`, `xunfei`, `moonshot`等,全部模型名称参考[common/const.py](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/common/const.py)文件
++ `model`: 模型名称,目前支持 `gpt-3.5-turbo`, `gpt-4o-mini`, `gpt-4o`, `gpt-4`, `wenxin` , `claude` , `gemini`, `glm-4`, `xunfei`, `moonshot`等,全部模型名称参考[common/const.py](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/common/const.py)文件
 + `temperature`,`frequency_penalty`,`presence_penalty`: Chat API接口参数,详情参考[OpenAI官方文档。](https://platform.openai.com/docs/api-reference/chat)
 + `proxy`:由于目前 `openai` 接口国内无法访问,需配置代理客户端的地址,详情参考 [#351](https://github.com/zhayujie/chatgpt-on-wechat/issues/351)
 + 对于图像生成,在满足个人或群组触发条件外,还需要额外的关键词前缀来触发,对应配置 `image_create_prefix`
@@ -19,6 +19,11 @@ class BaiduWenxinBot(Bot):
     def __init__(self):
         super().__init__()
         wenxin_model = conf().get("baidu_wenxin_model")
+        self.prompt_enabled = conf().get("baidu_wenxin_prompt_enabled")
+        if self.prompt_enabled:
+            self.prompt = conf().get("character_desc", "")
+            if self.prompt == "":
+                logger.warn("[BAIDU] Although you enabled model prompt, character_desc is not specified.")
         if wenxin_model is not None:
             wenxin_model = conf().get("baidu_wenxin_model") or "eb-instant"
         else:
@@ -84,7 +89,7 @@ class BaiduWenxinBot(Bot):
         headers = {
             'Content-Type': 'application/json'
         }
-        payload = {'messages': session.messages}
+        payload = {'messages': session.messages, 'system': self.prompt} if self.prompt_enabled else {'messages': session.messages}
         response = requests.request("POST", url, headers=headers, data=json.dumps(payload))
         response_text = json.loads(response.text)
         logger.info(f"[BAIDU] response text={response_text}")
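The wenxin hunk above only attaches a `system` field to the request body when the prompt feature is enabled, so the API never receives an empty system prompt. A minimal standalone sketch of that conditional payload construction (the helper name `build_payload` is illustrative, not from the project):

```python
def build_payload(messages, prompt_enabled=False, prompt=""):
    """Build a wenxin-style request body; attach 'system' only when a prompt is enabled."""
    if prompt_enabled:
        return {"messages": messages, "system": prompt}
    return {"messages": messages}
```

Keeping the `system` key out of the payload entirely (rather than sending an empty string) avoids surprising the API with a blank system prompt.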
@@ -5,7 +5,7 @@ import time

 import openai
 import openai.error
 import requests

+from common import const
 from bot.bot import Bot
 from bot.chatgpt.chat_gpt_session import ChatGPTSession
 from bot.openai.open_ai_image import OpenAIImage
@@ -15,7 +15,7 @@ from bridge.reply import Reply, ReplyType
 from common.log import logger
 from common.token_bucket import TokenBucket
 from config import conf, load_config
+from bot.baidu.baidu_wenxin_session import BaiduWenxinSession

 # OpenAI对话模型API (可用)
 class ChatGPTBot(Bot, OpenAIImage):
@@ -30,10 +30,12 @@ class ChatGPTBot(Bot, OpenAIImage):
             openai.proxy = proxy
         if conf().get("rate_limit_chatgpt"):
             self.tb4chatgpt = TokenBucket(conf().get("rate_limit_chatgpt", 20))
+        conf_model = conf().get("model") or "gpt-3.5-turbo"
         self.sessions = SessionManager(ChatGPTSession, model=conf().get("model") or "gpt-3.5-turbo")
+        # o1相关模型不支持system prompt,暂时用文心模型的session
         self.args = {
-            "model": conf().get("model") or "gpt-3.5-turbo",  # 对话模型的名称
+            "model": conf_model,  # 对话模型的名称
             "temperature": conf().get("temperature", 0.9),  # 值在[0,1]之间,越大表示回复越具有不确定性
             # "max_tokens":4096,  # 回复最大的字符数
             "top_p": conf().get("top_p", 1),
@@ -42,6 +44,12 @@ class ChatGPTBot(Bot, OpenAIImage):
             "request_timeout": conf().get("request_timeout", None),  # 请求超时时间,openai接口默认设置为600,对于难问题一般需要较长时间
             "timeout": conf().get("request_timeout", None),  # 重试超时时间,在这个时间内,将会自动重试
         }
+        # o1相关模型固定了部分参数,暂时去掉
+        if conf_model in [const.O1, const.O1_MINI]:
+            self.sessions = SessionManager(BaiduWenxinSession, model=conf().get("model") or const.O1_MINI)
+            remove_keys = ["temperature", "top_p", "frequency_penalty", "presence_penalty"]
+            for key in remove_keys:
+                self.args.pop(key, None)  # 如果键不存在,使用 None 来避免抛出错误

     def reply(self, query, context=None):
         # acquire reply content
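The o1 hunk above strips sampling parameters that o1-family models reject, using `dict.pop(key, None)` so absent keys do not raise. A self-contained sketch of that argument sanitizing step (function and constant names are mine, not the project's):

```python
O1_MODELS = {"o1-preview", "o1-mini"}  # mirrors const.O1 / const.O1_MINI in the diff

def sanitize_args(model, args):
    """Drop sampling parameters that o1-family models fix server-side; others keep them."""
    args = dict(args)  # copy, to avoid mutating the caller's dict
    if model in O1_MODELS:
        for key in ("temperature", "top_p", "frequency_penalty", "presence_penalty"):
            args.pop(key, None)  # None default avoids KeyError when the key is absent
    return args
```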
@@ -208,11 +216,33 @@ class AzureChatGPTBot(ChatGPTBot):
             headers = {"api-key": api_key, "Content-Type": "application/json"}
             try:
                 body = {"prompt": query, "size": conf().get("image_create_size", "1024x1024"), "quality": conf().get("dalle3_image_quality", "standard")}
-                submission = requests.post(url, headers=headers, json=body)
-                image_url = submission.json()['data'][0]['url']
-                return True, image_url
+                response = requests.post(url, headers=headers, json=body)
+                response.raise_for_status()  # 检查请求是否成功
+                data = response.json()
+
+                # 检查响应中是否包含图像 URL
+                if 'data' in data and len(data['data']) > 0 and 'url' in data['data'][0]:
+                    image_url = data['data'][0]['url']
+                    return True, image_url
+                else:
+                    error_message = "响应中没有图像 URL"
+                    logger.error(error_message)
+                    return False, "图片生成失败"
+
+            except requests.exceptions.RequestException as e:
+                # 捕获所有请求相关的异常
+                try:
+                    error_detail = response.json().get('error', {}).get('message', str(e))
+                except ValueError:
+                    error_detail = str(e)
+                error_message = f"{error_detail}"
+                logger.error(error_message)
+                return False, error_message
+
             except Exception as e:
-                logger.error("create image error: {}".format(e))
+                # 捕获所有其他异常
+                error_message = f"生成图像时发生错误: {e}"
+                logger.error(error_message)
                 return False, "图片生成失败"
         else:
             return False, "图片生成失败,未配置text_to_image参数"
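The Azure hunk above replaces a chained lookup (`submission.json()['data'][0]['url']`, which raised on any malformed body) with guarded checks on each level of the response. The parsing part of that defensive pattern, isolated from networking (helper name is illustrative):

```python
def extract_image_url(data):
    """Return (ok, url_or_error) from a DALL·E-style JSON body, guarding every lookup."""
    items = data.get("data") or []
    if items and "url" in items[0]:
        return True, items[0]["url"]
    return False, "no image URL in response"
```

Because every access is guarded, an empty body, an empty `data` list, or an item without `url` all fall through to the error tuple instead of raising `KeyError`/`IndexError`.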
@@ -67,7 +67,7 @@ def num_tokens_from_messages(messages, model):
     elif model in ["gpt-4-0314", "gpt-4-0613", "gpt-4-32k", "gpt-4-32k-0613", "gpt-3.5-turbo-0613",
                    "gpt-3.5-turbo-16k", "gpt-3.5-turbo-16k-0613", "gpt-35-turbo-16k", "gpt-4-turbo-preview",
                    "gpt-4-1106-preview", const.GPT4_TURBO_PREVIEW, const.GPT4_VISION_PREVIEW, const.GPT4_TURBO_01_25,
-                   const.GPT_4o, const.LINKAI_4o, const.LINKAI_4_TURBO]:
+                   const.GPT_4o, const.GPT_4O_0806, const.GPT_4o_MINI, const.LINKAI_4o, const.LINKAI_4_TURBO]:
         return num_tokens_from_messages(messages, model="gpt-4")
     elif model.startswith("claude-3"):
         return num_tokens_from_messages(messages, model="gpt-3.5-turbo")
@@ -130,4 +130,6 @@ class ClaudeAPIBot(Bot, OpenAIImage):
             return "claude-3-sonnet-20240229"
         elif model == "claude-3-haiku":
             return "claude-3-haiku-20240307"
+        elif model == "claude-3.5-sonnet":
+            return "claude-3-5-sonnet-20240620"
         return model
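The Claude hunk above extends an alias table: users configure a short name like `claude-3.5-sonnet` and the bot expands it to the dated snapshot id the API expects, while unknown names pass through untouched. The same chain of `elif`s can be written as a dict lookup (this dict-based form is my restructuring, not the project's code; the alias-to-snapshot pairs are taken from the diff):

```python
CLAUDE_ALIASES = {
    "claude-3-sonnet": "claude-3-sonnet-20240229",
    "claude-3-haiku": "claude-3-haiku-20240307",
    "claude-3.5-sonnet": "claude-3-5-sonnet-20240620",
}

def resolve_claude_model(model):
    """Expand a short alias to a dated snapshot id; unknown names pass through unchanged."""
    return CLAUDE_ALIASES.get(model, model)
```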
@@ -13,7 +13,9 @@ from bridge.context import ContextType, Context
 from bridge.reply import Reply, ReplyType
 from common.log import logger
 from config import conf
+from bot.chatgpt.chat_gpt_session import ChatGPTSession
 from bot.baidu.baidu_wenxin_session import BaiduWenxinSession
+from google.generativeai.types import HarmCategory, HarmBlockThreshold


 # OpenAI对话模型API (可用)
@@ -22,9 +24,11 @@ class GoogleGeminiBot(Bot):
     def __init__(self):
         super().__init__()
         self.api_key = conf().get("gemini_api_key")
-        # 复用文心的token计算方式
-        self.sessions = SessionManager(BaiduWenxinSession, model=conf().get("model") or "gpt-3.5-turbo")
+        # 复用chatGPT的token计算方式
+        self.sessions = SessionManager(ChatGPTSession, model=conf().get("model") or "gpt-3.5-turbo")
+        self.model = conf().get("model") or "gemini-pro"
+        if self.model == "gemini":
+            self.model = "gemini-pro"

     def reply(self, query, context: Context = None) -> Reply:
         try:
             if context.type != ContextType.TEXT:
@@ -34,18 +38,44 @@ class GoogleGeminiBot(Bot):
             session_id = context["session_id"]
             session = self.sessions.session_query(query, session_id)
             gemini_messages = self._convert_to_gemini_messages(self.filter_messages(session.messages))
             logger.debug(f"[Gemini] messages={gemini_messages}")
             genai.configure(api_key=self.api_key)
-            model = genai.GenerativeModel('gemini-pro')
-            response = model.generate_content(gemini_messages)
-            reply_text = response.text
-            self.sessions.session_reply(reply_text, session_id)
-            logger.info(f"[Gemini] reply={reply_text}")
-            return Reply(ReplyType.TEXT, reply_text)
+            model = genai.GenerativeModel(self.model)
+
+            # 添加安全设置
+            safety_settings = {
+                HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,
+                HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,
+                HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
+                HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
+            }
+
+            # 生成回复,包含安全设置
+            response = model.generate_content(
+                gemini_messages,
+                safety_settings=safety_settings
+            )
+            if response.candidates and response.candidates[0].content:
+                reply_text = response.candidates[0].content.parts[0].text
+                logger.info(f"[Gemini] reply={reply_text}")
+                self.sessions.session_reply(reply_text, session_id)
+                return Reply(ReplyType.TEXT, reply_text)
+            else:
+                # 没有有效响应内容,可能内容被屏蔽,输出安全评分
+                logger.warning("[Gemini] No valid response generated. Checking safety ratings.")
+                if hasattr(response, 'candidates') and response.candidates:
+                    for rating in response.candidates[0].safety_ratings:
+                        logger.warning(f"Safety rating: {rating.category} - {rating.probability}")
+                error_message = "No valid response generated due to safety constraints."
+                self.sessions.session_reply(error_message, session_id)
+                return Reply(ReplyType.ERROR, error_message)

         except Exception as e:
-            logger.error("[Gemini] fetch reply error, may contain unsafe content")
-            logger.error(e)
-            return Reply(ReplyType.ERROR, "invoke [Gemini] api failed!")
+            logger.error(f"[Gemini] Error generating response: {str(e)}", exc_info=True)
+            error_message = "Failed to invoke [Gemini] api!"
+            self.sessions.session_reply(error_message, session_id)
+            return Reply(ReplyType.ERROR, error_message)

     def _convert_to_gemini_messages(self, messages: list):
         res = []
         for msg in messages:
@@ -53,6 +83,8 @@ class GoogleGeminiBot(Bot):
                 role = "user"
             elif msg.get("role") == "assistant":
                 role = "model"
+            elif msg.get("role") == "system":
+                role = "user"
             else:
                 continue
             res.append({
@@ -69,7 +101,11 @@ class GoogleGeminiBot(Bot):
             return res
         for i in range(len(messages) - 1, -1, -1):
             message = messages[i]
-            if message.get("role") != turn:
+            role = message.get("role")
+            if role == "system":
+                res.insert(0, message)
+                continue
+            if role != turn:
                 continue
             res.insert(0, message)
             if turn == "user":
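The two Gemini hunks above change role mapping: `system` messages, which Gemini's chat format has no native slot for, are now folded into the `user` role instead of being dropped, and unknown roles are skipped. A standalone sketch of that conversion (the `ROLE_MAP` table and function name are mine; the role pairs follow the diff):

```python
ROLE_MAP = {"user": "user", "assistant": "model", "system": "user"}

def to_gemini_messages(messages):
    """Map OpenAI-style roles onto Gemini's user/model roles; unknown roles are skipped."""
    res = []
    for msg in messages:
        role = ROLE_MAP.get(msg.get("role"))
        if role is None:
            continue  # e.g. tool/function messages have no Gemini equivalent here
        res.append({"role": role, "parts": [{"text": msg.get("content", "")}]})
    return res
```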
@@ -399,6 +399,7 @@ class LinkAIBot(Bot):
             return
         max_send_num = conf().get("max_media_send_count")
         send_interval = conf().get("media_send_interval")
+        file_type = (".pdf", ".doc", ".docx", ".csv", ".xls", ".xlsx", ".txt", ".rtf", ".ppt", ".pptx")
         try:
             i = 0
             for url in image_urls:
@@ -407,7 +408,7 @@ class LinkAIBot(Bot):
                 i += 1
                 if url.endswith(".mp4"):
                     reply_type = ReplyType.VIDEO_URL
-                elif url.endswith(".pdf") or url.endswith(".doc") or url.endswith(".docx") or url.endswith(".csv"):
+                elif url.endswith(file_type):
                     reply_type = ReplyType.FILE
                     url = _download_file(url)
                     if not url:
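The LinkAI hunk above collapses a chain of `url.endswith(...) or ...` checks into a single call: `str.endswith` accepts a tuple of suffixes, which also made it cheap to extend the list with `.xls`, `.txt`, `.ppt`, and the rest. A minimal sketch of that routing (the classifier name and string return values are illustrative):

```python
FILE_TYPES = (".pdf", ".doc", ".docx", ".csv", ".xls", ".xlsx", ".txt", ".rtf", ".ppt", ".pptx")

def classify_media_url(url):
    """Pick a reply type from the URL suffix; str.endswith accepts a tuple of suffixes."""
    if url.endswith(".mp4"):
        return "video"
    if url.endswith(FILE_TYPES):
        return "file"
    return "image"
```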
@@ -19,8 +19,11 @@ class MoonshotBot(Bot):
     def __init__(self):
         super().__init__()
         self.sessions = SessionManager(MoonshotSession, model=conf().get("model") or "moonshot-v1-128k")
+        model = conf().get("model") or "moonshot-v1-128k"
+        if model == "moonshot":
+            model = "moonshot-v1-32k"
         self.args = {
-            "model": conf().get("model") or "moonshot-v1-128k",  # 对话模型的名称
+            "model": model,  # 对话模型的名称
             "temperature": conf().get("temperature", 0.3),  # 如果设置,值域须为 [0, 1] 我们推荐 0.3,以达到较合适的效果。
             "top_p": conf().get("top_p", 1.0),  # 使用默认值
         }
@@ -3,7 +3,7 @@
 import requests, json
 from bot.bot import Bot
 from bot.session_manager import SessionManager
-from bot.baidu.baidu_wenxin_session import BaiduWenxinSession
+from bot.chatgpt.chat_gpt_session import ChatGPTSession
 from bridge.context import ContextType, Context
 from bridge.reply import Reply, ReplyType
 from common.log import logger
@@ -41,18 +41,19 @@ class XunFeiBot(Bot):
         self.api_key = conf().get("xunfei_api_key")
         self.api_secret = conf().get("xunfei_api_secret")
-        # 默认使用v2.0版本: "generalv2"
-        # v1.5版本为 "general"
-        # v3.0版本为: "generalv3"
-        self.domain = "generalv3"
-        # 默认使用v2.0版本: "ws://spark-api.xf-yun.com/v2.1/chat"
-        # v1.5版本为: "ws://spark-api.xf-yun.com/v1.1/chat"
-        # v3.0版本为: "ws://spark-api.xf-yun.com/v3.1/chat"
-        # v3.5版本为: "wss://spark-api.xf-yun.com/v3.5/chat"
-        self.spark_url = "wss://spark-api.xf-yun.com/v3.5/chat"
+        # Spark Lite请求地址(spark_url): wss://spark-api.xf-yun.com/v1.1/chat, 对应的domain参数为: "general"
+        # Spark V2.0请求地址(spark_url): wss://spark-api.xf-yun.com/v2.1/chat, 对应的domain参数为: "generalv2"
+        # Spark Pro 请求地址(spark_url): wss://spark-api.xf-yun.com/v3.1/chat, 对应的domain参数为: "generalv3"
+        # Spark Pro-128K请求地址(spark_url): wss://spark-api.xf-yun.com/chat/pro-128k, 对应的domain参数为: "pro-128k"
+        # Spark Max 请求地址(spark_url): wss://spark-api.xf-yun.com/v3.5/chat, 对应的domain参数为: "generalv3.5"
+        # Spark4.0 Ultra 请求地址(spark_url): wss://spark-api.xf-yun.com/v4.0/chat, 对应的domain参数为: "4.0Ultra"
+        # 后续模型更新,对应的参数可以参考官网文档获取:https://www.xfyun.cn/doc/spark/Web.html
+        self.domain = conf().get("xunfei_domain", "generalv3.5")
+        self.spark_url = conf().get("xunfei_spark_url", "wss://spark-api.xf-yun.com/v3.5/chat")
         self.host = urlparse(self.spark_url).netloc
         self.path = urlparse(self.spark_url).path
         # 和wenxin使用相同的session机制
-        self.sessions = SessionManager(BaiduWenxinSession, model=const.XUNFEI)
+        self.sessions = SessionManager(ChatGPTSession, model=const.XUNFEI)

     def reply(self, query, context: Context = None) -> Reply:
         if context.type == ContextType.TEXT:
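The XunFei hunk above makes `domain` and `spark_url` configurable, then derives `host` and `path` from the websocket URL with `urlparse`; both pieces are needed separately when building the signed request. That extraction works because `urlparse` splits any `scheme://netloc/path` URL, including `wss://` (helper name is mine):

```python
from urllib.parse import urlparse

def split_ws_url(spark_url):
    """Separate a websocket endpoint into (host, path); auth signing needs both parts."""
    parsed = urlparse(spark_url)
    return parsed.netloc, parsed.path
```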
+3 -3
@@ -36,9 +36,9 @@ class Bridge(object):
             self.btype["chat"] = const.QWEN
         if model_type in [const.QWEN_TURBO, const.QWEN_PLUS, const.QWEN_MAX]:
             self.btype["chat"] = const.QWEN_DASHSCOPE
-        if model_type in [const.GEMINI]:
+        if model_type and model_type.startswith("gemini"):
             self.btype["chat"] = const.GEMINI
-        if model_type in [const.ZHIPU_AI]:
+        if model_type and model_type.startswith("glm"):
             self.btype["chat"] = const.ZHIPU_AI
         if model_type and model_type.startswith("claude-3"):
             self.btype["chat"] = const.CLAUDEAPI
@@ -46,7 +46,7 @@ class Bridge(object):
         if model_type in ["claude"]:
             self.btype["chat"] = const.CLAUDEAI

-        if model_type in ["moonshot-v1-8k", "moonshot-v1-32k", "moonshot-v1-128k"]:
+        if model_type in [const.MOONSHOT, "moonshot-v1-8k", "moonshot-v1-32k", "moonshot-v1-128k"]:
             self.btype["chat"] = const.MOONSHOT

         if model_type in ["abab6.5-chat"]:
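The Bridge hunk above swaps exact-name membership tests (`model_type in [const.GEMINI]`) for prefix checks (`model_type.startswith("gemini")`), so a whole model family routes to one backend without listing every variant; the `model_type and` guard keeps a `None` config from raising. A condensed sketch of that routing logic (function name and the returned strings are illustrative):

```python
def route_model(model_type):
    """Prefix-based routing covers whole model families instead of exact-name lists."""
    if model_type and model_type.startswith("gemini"):
        return "gemini"
    if model_type and model_type.startswith("glm"):
        return "zhipu_ai"
    if model_type in ("moonshot", "moonshot-v1-8k", "moonshot-v1-32k", "moonshot-v1-128k"):
        return "moonshot"
    return "chatgpt"  # default backend for anything unrecognized
```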
@@ -117,6 +117,7 @@ class ChatChannel(Channel):
                     logger.info("[chat_channel]receive group at")
                     if not conf().get("group_at_off", False):
                         flag = True
+                    self.name = self.name if self.name is not None else ""  # 部分渠道self.name可能没有赋值
                     pattern = f"@{re.escape(self.name)}(\u2005|\u0020)"
                     subtract_res = re.sub(pattern, r"", content)
                     if isinstance(context["msg"].at_list, list):
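The chat_channel hunk above defaults `self.name` to `""` before it is interpolated into the regex, because `re.escape(None)` would raise on channels that never set a name. The pattern itself strips an `@BotName` mention followed by WeChat's padding space (U+2005) or an ordinary space. A standalone sketch (helper name is mine):

```python
import re

def strip_at_mention(content, bot_name):
    """Remove '@BotName' plus the trailing WeChat padding space (U+2005) or ASCII space."""
    bot_name = bot_name if bot_name is not None else ""  # some channels never set a name
    pattern = f"@{re.escape(bot_name)}(\u2005|\u0020)"
    return re.sub(pattern, "", content)
```

`re.escape` matters here: a bot nickname containing regex metacharacters (`.` or `+`, say) would otherwise corrupt the pattern.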
@@ -100,7 +100,7 @@ class DingTalkChanel(ChatChannel, dingtalk_stream.ChatbotHandler):
         super(dingtalk_stream.ChatbotHandler, self).__init__()
         self.logger = self.setup_logger()
         # 历史消息id暂存,用于幂等控制
-        self.receivedMsgs = ExpiredDict(conf().get("expires_in_seconds"))
+        self.receivedMsgs = ExpiredDict(conf().get("expires_in_seconds", 3600))
         logger.info("[DingTalk] client_id={}, client_secret={} ".format(
             self.dingtalk_client_id, self.dingtalk_client_secret))
         # 无需群校验和前缀
@@ -163,7 +163,7 @@ class DingTalkChanel(ChatChannel, dingtalk_stream.ChatbotHandler):
         elif cmsg.ctype == ContextType.PATPAT:
             logger.debug("[DingTalk]receive patpat msg: {}".format(cmsg.content))
         elif cmsg.ctype == ContextType.TEXT:
-            logger.debug("[DingTalk]receive patpat msg: {}".format(cmsg.content))
+            logger.debug("[DingTalk]receive text msg: {}".format(cmsg.content))
         else:
             logger.debug("[DingTalk]receive other msg: {}".format(cmsg.content))
         context = self._compose_context(cmsg.ctype, cmsg.content, isgroup=True, msg=cmsg)
@@ -49,6 +49,7 @@ class DingTalkMessage(ChatMessage):
         if self.is_group:
             self.from_user_id = event.conversation_id
             self.actual_user_id = event.sender_id
+            self.is_at = True
         else:
             self.from_user_id = event.sender_id
             self.actual_user_id = event.sender_id
@@ -9,7 +9,6 @@ import json
 import os
 import threading
 import time

 import requests

 from bridge.context import *
@@ -21,6 +20,7 @@ from common.expired_dict import ExpiredDict
 from common.log import logger
 from common.singleton import singleton
 from common.time_check import time_checker
+from common.utils import convert_webp_to_png
 from config import conf, get_appdata_dir
 from lib import itchat
 from lib.itchat.content import *
@@ -100,7 +100,10 @@ def qrCallback(uuid, status, qrcode):
         qr = qrcode.QRCode(border=1)
         qr.add_data(url)
         qr.make(fit=True)
-        qr.print_ascii(invert=True)
+        try:
+            qr.print_ascii(invert=True)
+        except UnicodeEncodeError:
+            print("ASCII QR code printing failed due to encoding issues.")


 @singleton
@@ -109,7 +112,7 @@ class WechatChannel(ChatChannel):

     def __init__(self):
         super().__init__()
-        self.receivedMsgs = ExpiredDict(conf().get("expires_in_seconds"))
+        self.receivedMsgs = ExpiredDict(conf().get("expires_in_seconds", 3600))
         self.auto_login_times = 0

     def startup(self):
@@ -202,7 +205,7 @@ class WechatChannel(ChatChannel):
             logger.debug(f"[WX]receive attachment msg, file_name={cmsg.content}")
         else:
             logger.debug("[WX]receive group msg: {}".format(cmsg.content))
-        context = self._compose_context(cmsg.ctype, cmsg.content, isgroup=True, msg=cmsg)
+        context = self._compose_context(cmsg.ctype, cmsg.content, isgroup=True, msg=cmsg, no_need_at=conf().get("no_need_at", False))
         if context:
             self.produce(context)
@@ -229,6 +232,12 @@ class WechatChannel(ChatChannel):
                 image_storage.write(block)
             logger.info(f"[WX] download image success, size={size}, img_url={img_url}")
             image_storage.seek(0)
+            if ".webp" in img_url:
+                try:
+                    image_storage = convert_webp_to_png(image_storage)
+                except Exception as e:
+                    logger.error(f"Failed to convert image: {e}")
+                    return
             itchat.send_image(image_storage, toUserName=receiver)
             logger.info("[WX] sendImage url={}, receiver={}".format(img_url, receiver))
         elif reply.type == ReplyType.IMAGE:  # 从文件读取图片
@@ -266,6 +275,7 @@ def _send_login_success():
     except Exception as e:
         pass
+

 def _send_logout():
     try:
         from common.linkai_client import chat_client
@@ -274,6 +284,7 @@ def _send_logout():
     except Exception as e:
         pass
+

 def _send_qr_code(qrcode_list: list):
     try:
         from common.linkai_client import chat_client
@@ -281,3 +292,4 @@ def _send_qr_code(qrcode_list: list):
         chat_client.send_qrcode(qrcode_list)
     except Exception as e:
         pass
+
@@ -14,6 +14,11 @@ class WechatMessage(ChatMessage):
         self.create_time = itchat_msg["CreateTime"]
         self.is_group = is_group

+        notes_join_group = ["加入群聊", "加入了群聊", "invited", "joined"]  # 可通过添加对应语言的加入群聊通知中的关键词适配更多
+        notes_bot_join_group = ["邀请你", "invited you", "You've joined", "你通过扫描"]
+        notes_exit_group = ["移出了群聊", "removed"]  # 可通过添加对应语言的踢出群聊通知中的关键词适配更多
+        notes_patpat = ["拍了拍我", "tickled my", "tickled me"]  # 可通过添加对应语言的拍一拍通知中的关键词适配更多
+
         if itchat_msg["Type"] == TEXT:
             self.ctype = ContextType.TEXT
             self.content = itchat_msg["Text"]
@@ -26,30 +31,42 @@ class WechatMessage(ChatMessage):
             self.content = TmpDir().path() + itchat_msg["FileName"]  # content直接存临时目录路径
             self._prepare_fn = lambda: itchat_msg.download(self.content)
         elif itchat_msg["Type"] == NOTE and itchat_msg["MsgType"] == 10000:
-            if is_group and ("加入群聊" in itchat_msg["Content"] or "加入了群聊" in itchat_msg["Content"]):
-                self.ctype = ContextType.JOIN_GROUP
-                self.content = itchat_msg["Content"]
-                # 这里只能得到nickname, actual_user_id还是机器人的id
-                if "加入了群聊" in itchat_msg["Content"]:
-                    self.actual_user_nickname = re.findall(r"\"(.*?)\"", itchat_msg["Content"])[-1]
-                elif "加入群聊" in itchat_msg["Content"]:
-                    self.actual_user_nickname = re.findall(r"\"(.*?)\"", itchat_msg["Content"])[0]
-            elif is_group and ("移出了群聊" in itchat_msg["Content"]):
-                self.ctype = ContextType.EXIT_GROUP
-                self.content = itchat_msg["Content"]
-                self.actual_user_nickname = re.findall(r"\"(.*?)\"", itchat_msg["Content"])[0]
+            if is_group:
+                if any(note_bot_join_group in itchat_msg["Content"] for note_bot_join_group in notes_bot_join_group):  # 邀请机器人加入群聊
+                    logger.warn("机器人加入群聊消息,不处理~")
+                    pass
+                elif any(note_join_group in itchat_msg["Content"] for note_join_group in notes_join_group):  # 若有任何在notes_join_group列表中的字符串出现在NOTE中
+                    # 这里只能得到nickname, actual_user_id还是机器人的id
+                    if "加入群聊" not in itchat_msg["Content"]:
+                        self.ctype = ContextType.JOIN_GROUP
+                        self.content = itchat_msg["Content"]
+                        if "invited" in itchat_msg["Content"]:  # 匹配英文信息
+                            self.actual_user_nickname = re.findall(r'invited\s+(.+?)\s+to\s+the\s+group\s+chat', itchat_msg["Content"])[0]
+                        elif "joined" in itchat_msg["Content"]:  # 匹配通过二维码加入的英文信息
+                            self.actual_user_nickname = re.findall(r'"(.*?)" joined the group chat via the QR Code shared by', itchat_msg["Content"])[0]
+                        elif "加入了群聊" in itchat_msg["Content"]:
+                            self.actual_user_nickname = re.findall(r"\"(.*?)\"", itchat_msg["Content"])[-1]
+                    elif "加入群聊" in itchat_msg["Content"]:
+                        self.ctype = ContextType.JOIN_GROUP
+                        self.content = itchat_msg["Content"]
+                        self.actual_user_nickname = re.findall(r"\"(.*?)\"", itchat_msg["Content"])[0]
+                elif any(note_exit_group in itchat_msg["Content"] for note_exit_group in notes_exit_group):  # 若有任何在notes_exit_group列表中的字符串出现在NOTE中
+                    self.ctype = ContextType.EXIT_GROUP
+                    self.content = itchat_msg["Content"]
+                    self.actual_user_nickname = re.findall(r"\"(.*?)\"", itchat_msg["Content"])[0]
             elif "你已添加了" in itchat_msg["Content"]:  # 通过好友请求
                 self.ctype = ContextType.ACCEPT_FRIEND
                 self.content = itchat_msg["Content"]
-            elif "拍了拍我" in itchat_msg["Content"]:
+            elif any(note_patpat in itchat_msg["Content"] for note_patpat in notes_patpat):  # 若有任何在notes_patpat列表中的字符串出现在NOTE中
                 self.ctype = ContextType.PATPAT
                 self.content = itchat_msg["Content"]
                 if is_group:
-                    self.actual_user_nickname = re.findall(r"\"(.*?)\"", itchat_msg["Content"])[0]
+                    if "拍了拍我" in itchat_msg["Content"]:  # 识别中文
+                        self.actual_user_nickname = re.findall(r"\"(.*?)\"", itchat_msg["Content"])[0]
+                    elif ("tickled my" in itchat_msg["Content"] or "tickled me" in itchat_msg["Content"]):
+                        self.actual_user_nickname = re.findall(r'^(.*?)(?:tickled my|tickled me)', itchat_msg["Content"])[0]
             else:
                 raise NotImplementedError("Unsupported note message: " + itchat_msg["Content"])
         elif itchat_msg["Type"] == ATTACHMENT:
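The wechat_message hunk above replaces hard-coded Chinese substring tests with keyword lists (`notes_join_group`, `notes_exit_group`, `notes_patpat`), so supporting a new client language only means appending keywords, not adding branches. The core classification idiom is `any(kw in content for kw in keywords)`. A standalone sketch (function name and return labels are mine; the keyword lists come from the diff):

```python
NOTES_JOIN_GROUP = ["加入群聊", "加入了群聊", "invited", "joined"]
NOTES_EXIT_GROUP = ["移出了群聊", "removed"]
NOTES_PATPAT = ["拍了拍我", "tickled my", "tickled me"]

def classify_note(content):
    """Classify a system NOTE by keyword lists; new languages only need new keywords."""
    if any(kw in content for kw in NOTES_JOIN_GROUP):
        return "join_group"
    if any(kw in content for kw in NOTES_EXIT_GROUP):
        return "exit_group"
    if any(kw in content for kw in NOTES_PATPAT):
        return "patpat"
    return "unknown"
```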
@@ -17,7 +17,7 @@ from channel.wechatcom.wechatcomapp_client import WechatComAppClient
 from channel.wechatcom.wechatcomapp_message import WechatComAppMessage
 from common.log import logger
 from common.singleton import singleton
-from common.utils import compress_imgfile, fsize, split_string_by_utf8_length
+from common.utils import compress_imgfile, fsize, split_string_by_utf8_length, convert_webp_to_png
 from config import conf, subscribe_msg
 from voice.audio_convert import any_to_amr, split_audio
@@ -44,7 +44,7 @@ class WechatComAppChannel(ChatChannel):

     def startup(self):
         # start message listener
-        urls = ("/wxcomapp", "channel.wechatcom.wechatcomapp_channel.Query")
+        urls = ("/wxcomapp/?", "channel.wechatcom.wechatcomapp_channel.Query")
         app = web.application(urls, globals(), autoreload=False)
         port = conf().get("wechatcomapp_port", 9898)
         web.httpserver.runsimple(app.wsgifunc(), ("0.0.0.0", port))
@@ -99,6 +99,12 @@ class WechatComAppChannel(ChatChannel):
             image_storage = compress_imgfile(image_storage, 10 * 1024 * 1024 - 1)
             logger.info("[wechatcom] image compressed, sz={}".format(fsize(image_storage)))
         image_storage.seek(0)
+        if ".webp" in img_url:
+            try:
+                image_storage = convert_webp_to_png(image_storage)
+            except Exception as e:
+                logger.error(f"Failed to convert image: {e}")
+                return
         try:
             response = self.client.media.upload("image", image_storage)
             logger.debug("[wechatcom] upload image response: {}".format(response))
@@ -156,11 +162,12 @@ class Query:
         logger.debug("[wechatcom] receive message: {}, msg= {}".format(message, msg))
         if msg.type == "event":
             if msg.event == "subscribe":
-                reply_content = subscribe_msg()
-                if reply_content:
-                    reply = create_reply(reply_content, msg).render()
-                    res = channel.crypto.encrypt_message(reply, nonce, timestamp)
-                    return res
+                pass
+                # reply_content = subscribe_msg()
+                # if reply_content:
+                #     reply = create_reply(reply_content, msg).render()
+                #     res = channel.crypto.encrypt_message(reply, nonce, timestamp)
+                #     return res
         else:
             try:
                 wechatcom_msg = WechatComAppMessage(msg, client=channel.client)
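The wechatcom hunk above changes the route from `"/wxcomapp"` to `"/wxcomapp/?"`. In web.py, URL patterns are regular expressions matched against the request path, so the trailing `/?` accepts both `/wxcomapp` and `/wxcomapp/` with one entry. The effect can be illustrated with a plain regex match (this assumes web.py anchors patterns to the full path, which matches its documented behavior; the helper is mine):

```python
import re

def matches_route(path, pattern="/wxcomapp/?"):
    """web.py URL patterns are regexes; the trailing '/?' accepts both path forms."""
    return re.fullmatch(pattern, path) is not None
```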
+24 -5
@@ -11,7 +11,7 @@ QWEN = "qwen"  # 旧版通义模型
 QWEN_DASHSCOPE = "dashscope"  # 通义新版sdk和api key

-GEMINI = "gemini"
+GEMINI = "gemini"  # gemini-1.0-pro
 ZHIPU_AI = "glm-4"
 MOONSHOT = "moonshot"
 MiniMax = "minimax"
@@ -24,6 +24,7 @@ GPT35_0125 = "gpt-3.5-turbo-0125"
 GPT35_1106 = "gpt-3.5-turbo-1106"

 GPT_4o = "gpt-4o"
+GPT_4O_0806 = "gpt-4o-2024-08-06"
 GPT4_TURBO = "gpt-4-turbo"
 GPT4_TURBO_PREVIEW = "gpt-4-turbo-preview"
 GPT4_TURBO_04_09 = "gpt-4-turbo-2024-04-09"
@@ -32,10 +33,14 @@ GPT4_TURBO_11_06 = "gpt-4-1106-preview"
 GPT4_VISION_PREVIEW = "gpt-4-vision-preview"

 GPT4 = "gpt-4"
+GPT_4o_MINI = "gpt-4o-mini"
 GPT4_32k = "gpt-4-32k"
 GPT4_06_13 = "gpt-4-0613"
 GPT4_32k_06_13 = "gpt-4-32k-0613"

+O1 = "o1-preview"
+O1_MINI = "o1-mini"
+
 WHISPER_1 = "whisper-1"
 TTS_1 = "tts-1"
 TTS_1_HD = "tts-1-hd"
@@ -51,16 +56,30 @@ LINKAI_35 = "linkai-3.5"
 LINKAI_4_TURBO = "linkai-4-turbo"
 LINKAI_4o = "linkai-4o"

+GEMINI_PRO = "gemini-1.0-pro"
+GEMINI_15_flash = "gemini-1.5-flash"
+GEMINI_15_PRO = "gemini-1.5-pro"
+
+GLM_4 = "glm-4"
+GLM_4_PLUS = "glm-4-plus"
+GLM_4_flash = "glm-4-flash"
+GLM_4_LONG = "glm-4-long"
+GLM_4_ALLTOOLS = "glm-4-alltools"
+GLM_4_0520 = "glm-4-0520"
+GLM_4_AIR = "glm-4-air"
+GLM_4_AIRX = "glm-4-airx"
+
 MODEL_LIST = [
     GPT35, GPT35_0125, GPT35_1106, "gpt-3.5-turbo-16k",
-    GPT_4o, GPT4_TURBO, GPT4_TURBO_PREVIEW, GPT4_TURBO_01_25, GPT4_TURBO_11_06, GPT4, GPT4_32k, GPT4_06_13, GPT4_32k_06_13,
+    O1, O1_MINI, GPT_4o, GPT_4O_0806, GPT_4o_MINI, GPT4_TURBO, GPT4_TURBO_PREVIEW, GPT4_TURBO_01_25, GPT4_TURBO_11_06, GPT4, GPT4_32k, GPT4_06_13, GPT4_32k_06_13,
     WEN_XIN, WEN_XIN_4,
-    XUNFEI, GEMINI, ZHIPU_AI, MOONSHOT,
-    "claude", "claude-3-haiku", "claude-3-sonnet", "claude-3-opus", "claude-3-opus-20240229",
+    XUNFEI,
+    ZHIPU_AI, GLM_4, GLM_4_PLUS, GLM_4_flash, GLM_4_LONG, GLM_4_ALLTOOLS, GLM_4_0520, GLM_4_AIR, GLM_4_AIRX,
+    MOONSHOT, MiniMax,
+    GEMINI, GEMINI_PRO, GEMINI_15_flash, GEMINI_15_PRO,
+    "claude", "claude-3-haiku", "claude-3-sonnet", "claude-3-opus", "claude-3-opus-20240229", "claude-3.5-sonnet",
     "moonshot-v1-8k", "moonshot-v1-32k", "moonshot-v1-128k",
     QWEN, QWEN_TURBO, QWEN_PLUS, QWEN_MAX,
-    MiniMax,
     LINKAI_35, LINKAI_4_TURBO, LINKAI_4o
 ]
+10 -2
@@ -42,11 +42,19 @@ class ChatClient(LinkAIClient):
        if reply_voice_mode:
            if reply_voice_mode == "voice_reply_voice":
                local_config["voice_reply_voice"] = True
                local_config["always_reply_voice"] = False
            elif reply_voice_mode == "always_reply_voice":
                local_config["always_reply_voice"] = True
                local_config["voice_reply_voice"] = True
            elif reply_voice_mode == "no_reply_voice":
                local_config["always_reply_voice"] = False
                local_config["voice_reply_voice"] = False

-       if config.get("admin_password") and plugin_config.get("Godcmd"):
-           plugin_config["Godcmd"]["password"] = config.get("admin_password")
+       if config.get("admin_password"):
+           if not plugin_config.get("Godcmd"):
+               plugin_config["Godcmd"] = {"password": config.get("admin_password"), "admin_users": []}
+           else:
+               plugin_config["Godcmd"]["password"] = config.get("admin_password")
+           PluginManager().instances["GODCMD"].reload()

        if config.get("group_app_map") and pconf("linkai"):
+15 -1
@@ -2,7 +2,7 @@ import io
 import os
 from urllib.parse import urlparse
-from PIL import Image

+from common.log import logger

 def fsize(file):
     if isinstance(file, io.BytesIO):
@@ -54,3 +54,17 @@ def split_string_by_utf8_length(string, max_length, max_split=0):
 def get_path_suffix(path):
     path = urlparse(path).path
     return os.path.splitext(path)[-1].lstrip('.')
+
+
+def convert_webp_to_png(webp_image):
+    from PIL import Image
+    try:
+        webp_image.seek(0)
+        img = Image.open(webp_image).convert("RGBA")
+        png_image = io.BytesIO()
+        img.save(png_image, format="PNG")
+        png_image.seek(0)
+        return png_image
+    except Exception as e:
+        logger.error(f"Failed to convert WEBP to PNG: {e}")
+        raise
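The `get_path_suffix` helper shown above parses the URL before taking the extension, so query strings never leak into the suffix; this is what lets the channel code spot `.webp` image URLs reliably. A quick standalone check (the function body is reproduced verbatim from the hunk; the sample URLs are illustrative):

```python
import os
from urllib.parse import urlparse

def get_path_suffix(path):
    # parse first so query strings and fragments don't leak into the extension
    path = urlparse(path).path
    return os.path.splitext(path)[-1].lstrip('.')

print(get_path_suffix("https://example.com/pic.webp?token=abc"))  # webp
print(get_path_suffix("/tmp/voice.wav"))  # wav
```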
@@ -17,7 +17,7 @@ available_setting = {
     "open_ai_api_base": "https://api.openai.com/v1",
     "proxy": "",  # openai使用的代理
     # chatgpt模型, 当use_azure_chatgpt为true时,其名称为Azure上model deployment名称
-    "model": "gpt-3.5-turbo",  # 支持ChatGPT、Claude、Gemini、文心一言、通义千问、Kimi、讯飞星火、智谱、LinkAI等模型,模型具体名称详见common/const.py文件列出的模型
+    "model": "gpt-3.5-turbo",  # 可选择: gpt-4o, gpt-4o-mini, gpt-4-turbo, claude-3-sonnet, wenxin, moonshot, qwen-turbo, xunfei, glm-4, minimax, gemini等模型,全部可选模型详见common/const.py文件
     "bot_type": "",  # 可选配置,使用兼容openai格式的三方服务时候,需填"chatGPT"。bot具体名称详见common/const.py文件列出的bot_type,如不填根据model名称判断
     "use_azure_chatgpt": False,  # 是否使用azure的chatgpt
     "azure_deployment_id": "",  # azure 模型部署名称
@@ -27,6 +27,7 @@ available_setting = {
     "single_chat_reply_prefix": "[bot] ",  # 私聊时自动回复的前缀,用于区分真人
     "single_chat_reply_suffix": "",  # 私聊时自动回复的后缀,\n 可以换行
     "group_chat_prefix": ["@bot"],  # 群聊时包含该前缀则会触发机器人回复
+    "no_need_at": False,  # 群聊回复时是否不需要艾特
     "group_chat_reply_prefix": "",  # 群聊时自动回复的前缀
     "group_chat_reply_suffix": "",  # 群聊时自动回复的后缀,\n 可以换行
     "group_chat_keyword": [],  # 群聊时包含该关键词则会触发机器人回复
@@ -69,10 +70,13 @@ available_setting = {
     "baidu_wenxin_model": "eb-instant",  # 默认使用ERNIE-Bot-turbo模型
     "baidu_wenxin_api_key": "",  # Baidu api key
     "baidu_wenxin_secret_key": "",  # Baidu secret key
+    "baidu_wenxin_prompt_enabled": False,  # Enable prompt if you are using ernie character model
     # 讯飞星火API
     "xunfei_app_id": "",  # 讯飞应用ID
     "xunfei_api_key": "",  # 讯飞 API key
     "xunfei_api_secret": "",  # 讯飞 API secret
+    "xunfei_domain": "",  # 讯飞模型对应的domain参数,Spark4.0 Ultra为 4.0Ultra,其他模型详见: https://www.xfyun.cn/doc/spark/Web.html
+    "xunfei_spark_url": "",  # 讯飞模型对应的请求地址,Spark4.0 Ultra为 wss://spark-api.xf-yun.com/v4.0/chat,其他模型参考详见: https://www.xfyun.cn/doc/spark/Web.html
     # claude 配置
     "claude_api_cookie": "",
     "claude_uuid": "",
@@ -95,8 +99,8 @@ available_setting = {
     "group_speech_recognition": False,  # 是否开启群组语音识别
     "voice_reply_voice": False,  # 是否使用语音回复语音,需要设置对应语音合成引擎的api key
     "always_reply_voice": False,  # 是否一直使用语音回复
-    "voice_to_text": "openai",  # 语音识别引擎,支持openai,baidu,google,azure
-    "text_to_voice": "openai",  # 语音合成引擎,支持openai,baidu,google,pytts(offline),azure,elevenlabs,edge(online)
+    "voice_to_text": "openai",  # 语音识别引擎,支持openai,baidu,google,azure,xunfei,ali
+    "text_to_voice": "openai",  # 语音合成引擎,支持openai,baidu,google,azure,xunfei,ali,pytts(offline),elevenlabs,edge(online)
     "text_to_voice_model": "tts-1",
     "tts_voice_id": "alloy",
     # baidu 语音api配置, 使用百度语音识别和语音合成时需要
@@ -242,7 +246,7 @@ def drag_sensitive(config):
         conf_dict_copy = copy.deepcopy(conf_dict)
         for key in conf_dict_copy:
             if "key" in key or "secret" in key:
-                if isinstance(key, str):
+                if isinstance(conf_dict_copy[key], str):
                     conf_dict_copy[key] = conf_dict_copy[key][0:3] + "*" * 5 + conf_dict_copy[key][-3:]
         return json.dumps(conf_dict_copy, indent=4)

@@ -250,7 +254,7 @@ def drag_sensitive(config):
         config_copy = copy.deepcopy(config)
         for key in config:
             if "key" in key or "secret" in key:
-                if isinstance(key, str):
+                if isinstance(config_copy[key], str):
                     config_copy[key] = config_copy[key][0:3] + "*" * 5 + config_copy[key][-3:]
         return config_copy
     except Exception as e:
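The `drag_sensitive` fix above moves the `isinstance` check from the key (which is always a string, so the old check was a no-op) to the value, so non-string values such as lists or booleans are no longer mangled by the slicing. A minimal sketch of the same masking rule, under a hypothetical function name with illustrative config values:

```python
import copy

def mask_sensitive(conf_dict):
    # mirror drag_sensitive: mask values whose key contains "key" or "secret",
    # but only when the VALUE is a str (the fixed check above)
    conf_copy = copy.deepcopy(conf_dict)
    for key in conf_copy:
        if "key" in key or "secret" in key:
            if isinstance(conf_copy[key], str):
                conf_copy[key] = conf_copy[key][0:3] + "*" * 5 + conf_copy[key][-3:]
    return conf_copy

masked = mask_sensitive({"open_ai_api_key": "sk-1234567890abc", "debug": True})
print(masked["open_ai_api_key"])  # sk-*****abc
```

With the old key-based check, a boolean value under a key like `"keys_enabled"` would have hit the string slicing and raised; the value-based check skips it.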
@@ -6,6 +6,7 @@ services:
     security_opt:
       - seccomp:unconfined
     environment:
+      TZ: 'Asia/Shanghai'
       OPEN_AI_API_KEY: 'YOUR API KEY'
       MODEL: 'gpt-3.5-turbo'
       PROXY: ''
@@ -10,9 +10,7 @@
     },
     "tool": {
       "tools": [
-        "python",
         "url-get",
-        "terminal",
         "meteo-weather"
       ],
       "kwargs": {
@@ -55,7 +55,7 @@ class Keyword(Plugin):
                 reply_text = self.keyword[content]

                 # 判断匹配内容的类型
-                if (reply_text.startswith("http://") or reply_text.startswith("https://")) and any(reply_text.endswith(ext) for ext in [".jpg", ".jpeg", ".png", ".gif", ".img"]):
+                if (reply_text.startswith("http://") or reply_text.startswith("https://")) and any(reply_text.endswith(ext) for ext in [".jpg", ".webp", ".jpeg", ".png", ".gif", ".img"]):
                     # 如果是以 http:// 或 https:// 开头,且".jpg", ".jpeg", ".png", ".gif", ".img"结尾,则认为是图片 URL。
                     reply = Reply()
                     reply.type = ReplyType.IMAGE_URL
@@ -18,6 +18,7 @@ class Plugin:
         if not plugin_conf:
             # 全局配置不存在,则获取插件目录下的配置
             plugin_config_path = os.path.join(self.path, "config.json")
+            logger.debug(f"loading plugin config, plugin_config_path={plugin_config_path}, exist={os.path.exists(plugin_config_path)}")
             if os.path.exists(plugin_config_path):
                 with open(plugin_config_path, "r", encoding="utf-8") as f:
                     plugin_conf = json.load(f)
@@ -99,7 +99,8 @@ class Role(Plugin):
         if e_context["context"].type != ContextType.TEXT:
             return
         btype = Bridge().get_bot_type("chat")
-        if btype not in [const.OPEN_AI, const.CHATGPT, const.CHATGPTONAZURE, const.LINKAI]:
+        if btype not in [const.OPEN_AI, const.CHATGPT, const.CHATGPTONAZURE, const.QWEN_DASHSCOPE, const.XUNFEI, const.BAIDU, const.ZHIPU_AI, const.MOONSHOT, const.MiniMax, const.LINKAI]:
+            logger.debug(f'不支持的bot: {btype}')
             return
         bot = Bridge().get_bot("chat")
         content = e_context["context"].content[:]
+9 -1
@@ -22,7 +22,7 @@
     },
     "pictureChange": {
       "url": "https://github.com/Yanyutin753/pictureChange.git",
-      "desc": "利用stable-diffusion和百度Ai进行图生图或者画图的插件"
+      "desc": "1. 支持百度AI和Stable Diffusion WebUI进行图像处理,提供多种模型选择,支持图生图、文生图自定义模板。2. 支持Suno音乐AI可将图像和文字转为音乐。3. 支持自定义模型进行文件、图片总结功能。4. 支持管理员控制群聊内容与参数和功能改变。"
     },
     "Blackroom": {
       "url": "https://github.com/dividduang/blackroom.git",
@@ -31,6 +31,14 @@
     "midjourney": {
       "url": "https://github.com/baojingyu/midjourney.git",
       "desc": "利用midjourney实现ai绘图的的插件"
     },
+    "solitaire": {
+      "url": "https://github.com/Wang-zhechao/solitaire.git",
+      "desc": "机器人微信接龙插件"
+    },
+    "HighSpeedTicket": {
+      "url": "https://github.com/He0607/HighSpeedTicket.git",
+      "desc": "高铁(火车)票查询插件"
+    }
   }
 }
@@ -1,8 +1,6 @@
 {
     "tools": [
-        "python",
         "url-get",
-        "terminal",
         "meteo"
     ],
     "kwargs": {
@@ -22,11 +22,13 @@ class Tool(Plugin):
     def __init__(self):
         super().__init__()
         self.handlers[Event.ON_HANDLE_CONTEXT] = self.on_handle_context

         self.app = self._reset_app()

+        if not self.tool_config.get("tools"):
+            logger.warn("[tool] init failed, ignore ")
+            raise Exception("config.json not found")
         logger.info("[tool] inited")

     def get_help_text(self, verbose=False, **kwargs):
         help_text = "这是一个能让chatgpt联网,搜索,数字运算的插件,将赋予强大且丰富的扩展能力。"
         trigger_prefix = conf().get("plugin_trigger_prefix", "$")
@@ -0,0 +1,192 @@
#!/usr/bin/env bash
set -e

# 颜色定义
RED='\033[0;31m'     # 红色
GREEN='\033[0;32m'   # 绿色
YELLOW='\033[0;33m'  # 黄色
BLUE='\033[0;34m'    # 蓝色
NC='\033[0m'         # 无颜色

# 获取当前脚本的目录
export BASE_DIR=$(cd "$(dirname "$0")"; pwd)
echo -e "${GREEN}📁 BASE_DIR: ${BASE_DIR}${NC}"

# 检查 config.json 文件是否存在
check_config_file() {
    if [ ! -f "${BASE_DIR}/config.json" ]; then
        echo -e "${RED}❌ 错误:未找到 config.json 文件。请确保 config.json 存在于当前目录。${NC}"
        exit 1
    fi
}

# 检查 Python 版本是否大于等于 3.7,并检查 pip 是否可用
check_python_version() {
    if ! command -v python3 &> /dev/null; then
        echo -e "${RED}❌ 错误:未找到 Python3。请安装 Python 3.7 或以上版本。${NC}"
        exit 1
    fi

    PYTHON_VERSION=$(python3 -c 'import sys; print(f"{sys.version_info.major}.{sys.version_info.minor}")')
    PYTHON_MAJOR=$(echo "$PYTHON_VERSION" | cut -d'.' -f1)
    PYTHON_MINOR=$(echo "$PYTHON_VERSION" | cut -d'.' -f2)

    if (( PYTHON_MAJOR < 3 || (PYTHON_MAJOR == 3 && PYTHON_MINOR < 7) )); then
        echo -e "${RED}❌ 错误:Python 版本为 ${PYTHON_VERSION}。请安装 Python 3.7 或以上版本。${NC}"
        exit 1
    fi

    if ! python3 -m pip --version &> /dev/null; then
        echo -e "${RED}❌ 错误:未找到 pip。请安装 pip。${NC}"
        exit 1
    fi
}

# 检查并安装缺失的依赖
install_dependencies() {
    echo -e "${YELLOW}⏳ 正在安装依赖...${NC}"

    if [ ! -f "${BASE_DIR}/requirements.txt" ]; then
        echo -e "${RED}❌ 错误:未找到 requirements.txt 文件。${NC}"
        exit 1
    fi

    # 安装 requirements.txt 中的依赖,使用清华大学的 PyPI 镜像
    pip3 install -r "${BASE_DIR}/requirements.txt" -i https://pypi.tuna.tsinghua.edu.cn/simple

    # 处理 requirements-optional.txt(如果存在)
    if [ -f "${BASE_DIR}/requirements-optional.txt" ]; then
        echo -e "${YELLOW}⏳ 正在安装可选的依赖...${NC}"
        pip3 install -r "${BASE_DIR}/requirements-optional.txt" -i https://pypi.tuna.tsinghua.edu.cn/simple
    fi
}

# 启动项目
run_project() {
    echo -e "${GREEN}🚀 准备启动项目...${NC}"
    cd "${BASE_DIR}"
    sleep 2

    # 判断操作系统类型
    OS_TYPE=$(uname)

    if [[ "$OS_TYPE" == "Linux" ]]; then
        # 在 Linux 上使用 setsid
        setsid python3 "${BASE_DIR}/app.py" > "${BASE_DIR}/nohup.out" 2>&1 &
        echo -e "${GREEN}🚀 正在启动 ChatGPT-on-WeChat (Linux)...${NC}"
    elif [[ "$OS_TYPE" == "Darwin" ]]; then
        # 在 macOS 上直接运行
        python3 "${BASE_DIR}/app.py" > "${BASE_DIR}/nohup.out" 2>&1 &
        echo -e "${GREEN}🚀 正在启动 ChatGPT-on-WeChat (macOS)...${NC}"
    else
        echo -e "${RED}❌ 错误:不支持的操作系统 ${OS_TYPE}。${NC}"
        exit 1
    fi

    sleep 2
    # 显示日志输出,供用户扫码
    tail -n 30 -f "${BASE_DIR}/nohup.out"
}

# 更新项目
update_project() {
    echo -e "${GREEN}🔄 准备更新项目,现在停止项目...${NC}"
    cd "${BASE_DIR}"

    # 停止项目
    stop_project
    echo -e "${GREEN}🔄 开始更新项目...${NC}"
    # 更新代码,从 git 仓库拉取最新代码
    if [ -d .git ]; then
        GIT_PULL_OUTPUT=$(git pull)
        if [ $? -eq 0 ]; then
            if [[ "$GIT_PULL_OUTPUT" == *"Already up to date."* ]]; then
                echo -e "${GREEN}✅ 代码已经是最新的。${NC}"
            else
                echo -e "${GREEN}✅ 代码更新完成。${NC}"
            fi
        else
            echo -e "${YELLOW}⚠️ 从 GitHub 更新失败,尝试切换到 Gitee 仓库...${NC}"
            # 更改远程仓库为 Gitee
            git remote set-url origin https://gitee.com/zhayujie/chatgpt-on-wechat.git
            GIT_PULL_OUTPUT=$(git pull)
            if [ $? -eq 0 ]; then
                if [[ "$GIT_PULL_OUTPUT" == *"Already up to date."* ]]; then
                    echo -e "${GREEN}✅ 代码已经是最新的。${NC}"
                else
                    echo -e "${GREEN}✅ 从 Gitee 更新成功。${NC}"
                fi
            else
                echo -e "${RED}❌ 错误:从 Gitee 更新仍然失败,请检查网络连接。${NC}"
                exit 1
            fi
        fi
    else
        echo -e "${RED}❌ 错误:当前目录不是 git 仓库,无法更新代码。${NC}"
        exit 1
    fi

    # 安装依赖
    install_dependencies

    # 启动项目
    run_project
}

# 停止项目
stop_project() {
    echo -e "${GREEN}🛑 正在停止项目...${NC}"
    cd "${BASE_DIR}"
    pid=$(ps ax | grep -i app.py | grep "${BASE_DIR}" | grep python3 | grep -v grep | awk '{print $1}')
    if [ -z "$pid" ] ; then
        echo -e "${YELLOW}⚠️ 未找到正在运行的 ChatGPT-on-WeChat。${NC}"
        return
    fi

    echo -e "${GREEN}🛑 正在运行的 ChatGPT-on-WeChat (PID: ${pid})${NC}"

    kill ${pid}
    sleep 3

    if ps -p $pid > /dev/null; then
        echo -e "${YELLOW}⚠️ 进程未停止,尝试强制终止...${NC}"
        kill -9 ${pid}
    fi

    echo -e "${GREEN}✅ 已停止 ChatGPT-on-WeChat (PID: ${pid})${NC}"
}

# 主函数,根据用户参数执行操作
case "$1" in
    start)
        check_config_file
        check_python_version
        run_project
        ;;
    stop)
        stop_project
        ;;
    restart)
        stop_project
        check_config_file
        check_python_version
        run_project
        ;;
    update)
        check_config_file
        check_python_version
        update_project
        ;;
    *)
        echo -e "${YELLOW}=========================================${NC}"
        echo -e "${YELLOW}用法:${GREEN}$0 ${BLUE}{start|stop|restart|update}${NC}"
        echo -e "${YELLOW}示例:${NC}"
        echo -e "  ${GREEN}$0 ${BLUE}start${NC}"
        echo -e "  ${GREEN}$0 ${BLUE}stop${NC}"
        echo -e "  ${GREEN}$0 ${BLUE}restart${NC}"
        echo -e "  ${GREEN}$0 ${BLUE}update${NC}"
        echo -e "${YELLOW}=========================================${NC}"
        exit 1
        ;;
esac
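The version gate in `check_python_version` above splits the reported version on `.` and compares the parts with bash arithmetic, rather than comparing version strings lexically. The comparison can be exercised in isolation (the version value here is illustrative; `(( ))` requires bash, which matches the script's `#!/usr/bin/env bash` shebang):

```shell
#!/usr/bin/env bash
# replicate the check_python_version comparison with a fixed version string
PYTHON_VERSION="3.6"
PYTHON_MAJOR=$(echo "$PYTHON_VERSION" | cut -d'.' -f1)
PYTHON_MINOR=$(echo "$PYTHON_VERSION" | cut -d'.' -f2)
if (( PYTHON_MAJOR < 3 || (PYTHON_MAJOR == 3 && PYTHON_MINOR < 7) )); then
    echo "too old"
else
    echo "ok"
fi
```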
@@ -8,6 +8,7 @@ Description:

"""

+import http.client
 import json
 import time
 import requests
@@ -61,6 +62,69 @@ def text_to_speech_aliyun(url, text, appkey, token):

     return output_file

+
+def speech_to_text_aliyun(url, audioContent, appkey, token):
+    """
+    使用阿里云的语音识别服务识别音频文件中的语音。
+
+    参数:
+    - url (str): 阿里云语音识别服务的端点URL。
+    - audioContent (byte): pcm音频数据。
+    - appkey (str): 您的阿里云appkey。
+    - token (str): 阿里云API的认证令牌。
+
+    返回值:
+    - str: 成功时输出识别到的文本,否则为None。
+    """
+    format = 'pcm'
+    sample_rate = 16000
+    enablePunctuationPrediction = True
+    enableInverseTextNormalization = True
+    enableVoiceDetection = False
+
+    # 设置RESTful请求参数
+    request = url + '?appkey=' + appkey
+    request = request + '&format=' + format
+    request = request + '&sample_rate=' + str(sample_rate)
+
+    if enablePunctuationPrediction :
+        request = request + '&enable_punctuation_prediction=' + 'true'
+
+    if enableInverseTextNormalization :
+        request = request + '&enable_inverse_text_normalization=' + 'true'
+
+    if enableVoiceDetection :
+        request = request + '&enable_voice_detection=' + 'true'
+
+    host = 'nls-gateway-cn-shanghai.aliyuncs.com'
+
+    # 设置HTTPS请求头部
+    httpHeaders = {
+        'X-NLS-Token': token,
+        'Content-type': 'application/octet-stream',
+        'Content-Length': len(audioContent)
+    }
+
+    conn = http.client.HTTPSConnection(host)
+    conn.request(method='POST', url=request, body=audioContent, headers=httpHeaders)
+
+    response = conn.getresponse()
+    body = response.read()
+    try:
+        body = json.loads(body)
+        status = body['status']
+        if status == 20000000 :
+            result = body['result']
+            if result :
+                logger.info(f"阿里云语音识别到了:{result}")
+            conn.close()
+            return result
+        else :
+            logger.error(f"语音识别失败,状态码: {status}")
+    except ValueError:
+        logger.error(f"语音识别失败,收到非JSON格式的数据: {body}")
+    conn.close()
+    return None
+
+
 class AliyunTokenGenerator:
     """
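`speech_to_text_aliyun` above builds its REST endpoint by hand-concatenating query parameters onto the URL. The same query string can be assembled with `urllib.parse.urlencode`, which also handles escaping; this is a sketch, not the repo's code, and the helper name and sample appkey are illustrative (the parameter names are the ones the hunk already sends):

```python
from urllib.parse import urlencode

def build_asr_request(url, appkey):
    # same parameters speech_to_text_aliyun concatenates by hand
    params = {
        "appkey": appkey,
        "format": "pcm",
        "sample_rate": 16000,
        "enable_punctuation_prediction": "true",
        "enable_inverse_text_normalization": "true",
    }
    return url + "?" + urlencode(params)

req = build_asr_request("https://nls-gateway.cn-shanghai.aliyuncs.com/stream/v1/asr", "my-appkey")
print(req)
```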
+24 -4
@@ -15,9 +15,9 @@ import time

 from bridge.reply import Reply, ReplyType
 from common.log import logger
+from voice.audio_convert import get_pcm_from_wav
 from voice.voice import Voice
-from voice.ali.ali_api import AliyunTokenGenerator
-from voice.ali.ali_api import text_to_speech_aliyun
+from voice.ali.ali_api import AliyunTokenGenerator, speech_to_text_aliyun, text_to_speech_aliyun
 from config import conf
@@ -34,7 +34,8 @@ class AliVoice(Voice):
         self.token = None
         self.token_expire_time = 0
         # 默认复用阿里云千问的 access_key 和 access_secret
-        self.api_url = config.get("api_url")
+        self.api_url_voice_to_text = config.get("api_url_voice_to_text")
+        self.api_url_text_to_voice = config.get("api_url_text_to_voice")
         self.app_key = config.get("app_key")
         self.access_key_id = conf().get("qwen_access_key_id") or config.get("access_key_id")
         self.access_key_secret = conf().get("qwen_access_key_secret") or config.get("access_key_secret")
@@ -53,7 +54,7 @@ class AliVoice(Voice):
                       r'äöüÄÖÜáéíóúÁÉÍÓÚàèìòùÀÈÌÒÙâêîôûÂÊÎÔÛçÇñÑ,。!?,.]', '', text)
         # 提取有效的token
         token_id = self.get_valid_token()
-        fileName = text_to_speech_aliyun(self.api_url, text, self.app_key, token_id)
+        fileName = text_to_speech_aliyun(self.api_url_text_to_voice, text, self.app_key, token_id)
         if fileName:
             logger.info("[Ali] textToVoice text={} voice file name={}".format(text, fileName))
             reply = Reply(ReplyType.VOICE, fileName)
@@ -61,6 +62,25 @@ class AliVoice(Voice):
             reply = Reply(ReplyType.ERROR, "抱歉,语音合成失败")
         return reply

+    def voiceToText(self, voice_file):
+        """
+        将语音文件转换为文本。
+
+        :param voice_file: 要转换的语音文件。
+        :return: 返回一个Reply对象,其中包含转换得到的文本或错误信息。
+        """
+        # 提取有效的token
+        token_id = self.get_valid_token()
+        logger.debug("[Ali] voice file name={}".format(voice_file))
+        pcm = get_pcm_from_wav(voice_file)
+        text = speech_to_text_aliyun(self.api_url_voice_to_text, pcm, self.app_key, token_id)
+        if text:
+            logger.info("[Ali] VoicetoText = {}".format(text))
+            reply = Reply(ReplyType.TEXT, text)
+        else:
+            reply = Reply(ReplyType.ERROR, "抱歉,语音识别失败")
+        return reply
+
     def get_valid_token(self):
         """
         获取有效的阿里云token。
@@ -1,5 +1,6 @@
 {
-    "api_url": "https://nls-gateway-cn-shanghai.aliyuncs.com/stream/v1/tts",
+    "api_url_text_to_voice": "https://nls-gateway-cn-shanghai.aliyuncs.com/stream/v1/tts",
+    "api_url_voice_to_text": "https://nls-gateway.cn-shanghai.aliyuncs.com/stream/v1/asr",
     "app_key": "",
     "access_key_id": "",
     "access_key_secret": ""
@@ -65,7 +65,7 @@ class AzureVoice(Voice):
             reply = Reply(ReplyType.TEXT, result.text)
         else:
             cancel_details = result.cancellation_details
-            logger.error("[Azure] voiceToText error, result={}, errordetails={}".format(result, cancel_details.error_details))
+            logger.error("[Azure] voiceToText error, result={}, errordetails={}".format(result, cancel_details))
             reply = Reply(ReplyType.ERROR, "抱歉,语音识别失败")
             return reply