Compare commits


3 Commits

Author SHA1 Message Date
Claude 4c2f6485d8 docs: add AGENTS.md with codebase guide for AI assistants
Documents the Tauri 2 + React architecture, repo layout, service/command
layering, data locations, development workflow, testing strategy, and key
conventions (camelCase IPC, i18n, commit style) so AI coding assistants can
contribute without re-discovering the codebase each session.
2026-04-08 11:43:37 +00:00
Cod1ng dc4524e960 fix: handle UTF-8 multi-byte characters split across stream chunk boundaries (#1923)
* fix: handle UTF-8 multi-byte characters split across stream chunk boundaries

Replace String::from_utf8_lossy with append_utf8_safe in all four SSE
streaming paths. When a multi-byte UTF-8 character (e.g. Chinese, emoji)
is split across TCP chunk boundaries, from_utf8_lossy silently replaces
the incomplete halves with U+FFFD (�). This caused intermittent garbled
output in Claude Code when using the Copilot reverse proxy, because the
format conversion streams reconstruct SSE events from the corrupted buffer.

The new append_utf8_safe function preserves incomplete trailing bytes in
a remainder buffer and merges them with the next chunk before decoding,
ensuring characters are never split during UTF-8 conversion.

Fixes: intermittent U+FFFD replacement characters in Claude Code output
via Copilot proxy (not reproducible with direct Copilot connections like
opencode because they pass through raw bytes without format conversion).

* style: fix cargo fmt formatting in UTF-8 boundary tests

---------

Co-authored-by: Cod1ng <codingts@gmail.com>
Co-authored-by: encodets <encodets@gmail.com>
2026-04-08 10:02:35 +08:00
Dex Miller 34f16886a2 Normalize fragmented system prompts for strict chat backends (#1942)
Some OpenAI-compatible chat providers reject requests when Claude-side
system fragments arrive as multiple system messages. Normalize the
converted OpenAI chat payload so system content becomes a single
leading system message while leaving the rest of the message stream
unchanged.

Constraint: Nvidia/Qwen-style chat completions require a single leading system prompt
Rejected: Reorder system messages only | still leaves fragmented system prompts for strict backends
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Keep OpenAI chat system prompts normalized unless a provider explicitly requires fragmented system messages
Tested: cargo test proxy::providers::transform --manifest-path src-tauri/Cargo.toml
Not-tested: Full end-to-end proxy capture against Nvidia upstream in this session
Related: #1881
2026-04-08 09:41:20 +08:00
12 changed files with 829 additions and 216 deletions
+364
@@ -0,0 +1,364 @@
# AGENTS.md
Guidance for AI coding assistants (Claude Code, Codex, Gemini CLI, …) working in this
repository. Claude Code reads this file automatically; a local `CLAUDE.md` (gitignored
per `.gitignore`) may override or extend it per-developer.
## Project Overview
**CC Switch** is a cross-platform Tauri 2 desktop application that manages configurations
for multiple AI coding CLIs: **Claude Code, Codex, Gemini CLI, OpenCode, and OpenClaw**.
It provides provider switching, unified MCP/Prompts/Skills management, a local proxy with
failover, usage tracking, session browsing, and cloud sync — all backed by a SQLite SSOT.
- **Frontend**: React 18 + TypeScript + Vite + TailwindCSS 3.4 + shadcn/ui
- **Backend**: Rust (Tauri 2.8) with SQLite (`rusqlite`) persistence
- **State/cache**: TanStack Query v5 on the frontend; `Mutex<Connection>` on the backend
- **IPC**: Tauri commands (camelCase names) wrapped by a typed frontend API layer
- **i18n**: `react-i18next` with `zh` / `en` / `ja` locales (Chinese is the primary UI language)
## Repository Layout
```
├── src/ # Frontend (React + TypeScript)
│ ├── App.tsx # Root shell — view routing, headers, dialogs
│ ├── main.tsx # Bootstrap, providers, config-error handling
│ ├── components/
│ │ ├── providers/ # Provider CRUD (cards, forms, dialogs)
│ │ ├── mcp/ # Unified MCP panel + wizard
│ │ ├── prompts/ # Prompts panel (Markdown editor)
│ │ ├── skills/ # Skills install/management + repo manager
│ │ ├── sessions/ # Session manager (history browser)
│ │ ├── proxy/ # Proxy + failover panels
│ │ ├── openclaw/ # OpenClaw-specific config panels
│ │ ├── settings/ # Settings pages (theme, dir, webdav, proxy, about…)
│ │ ├── deeplink/ # ccswitch:// import confirmation dialogs
│ │ ├── env/ # Env conflict warning banner
│ │ ├── universal/ # Cross-app (universal) provider UI
│ │ ├── usage/ # Usage dashboard, charts, pricing
│ │ ├── workspace/ # OpenClaw workspace/agent file editor
│ │ └── ui/ # shadcn/ui primitives (button, dialog, ...)
│ ├── hooks/ # Custom React hooks (business logic glue)
│ ├── lib/
│ │ ├── api/ # Typed Tauri IPC wrappers (one module per domain)
│ │ ├── query/ # TanStack Query config + query keys
│ │ ├── schemas/ # Zod schemas (provider/mcp/settings/common)
│ │ ├── errors/ # Error parsing helpers
│ │ ├── utils/ # Small helpers (base64, ...)
│ │ ├── authBinding.ts # Auth binding helpers
│ │ ├── clipboard.ts # Clipboard utils
│ │ ├── platform.ts # OS detection (isMac/isWin/isLinux)
│ │ └── updater.ts # Updater helpers
│ ├── contexts/UpdateContext.tsx
│ ├── i18n/ # i18next init + locales (en/zh/ja)
│ ├── config/ # Static presets (providers, mcp)
│ ├── icons/ # Provider icon index
│ ├── types.ts, types/ # Shared TypeScript types
│ └── utils/ # DOM/error helpers
├── src-tauri/ # Backend (Rust + Tauri 2)
│ ├── Cargo.toml # rust-version = 1.85
│ ├── tauri.conf.json # Deep link, updater, bundling config
│ ├── capabilities/ # Tauri permission manifests
│ └── src/
│ ├── lib.rs # App entry, tray, deep-link, setup
│ ├── main.rs # Binary entry delegating to lib
│ ├── commands/ # Tauri #[command] layer (by domain, mod.rs re-exports *)
│ │ # auth, provider, mcp, prompt, skill, proxy,
│ │ # session_manager, settings, usage, webdav_sync, …
│ ├── services/ # Business-logic layer
│ │ ├── provider/ # ProviderService (CRUD, switch, live sync, auth, usage)
│ │ ├── mcp.rs # McpService
│ │ ├── prompt.rs # PromptService
│ │ ├── skill.rs # SkillService
│ │ ├── proxy.rs # ProxyService (hot-switching local proxy)
│ │ ├── config.rs # ConfigService (import/export, backups)
│ │ ├── speedtest.rs # Endpoint latency
│ │ ├── webdav*.rs # WebDAV sync engine + auto-sync
│ │ └── usage_stats.rs # Usage aggregation
│ ├── database/
│ │ ├── mod.rs # Database struct, Mutex<Connection>, hooks
│ │ ├── schema.rs # Schema + migration (SCHEMA_VERSION = 6)
│ │ ├── migration.rs # JSON → SQLite migration
│ │ ├── backup.rs # Snapshot + SQL export
│ │ └── dao/ # providers, mcp, prompts, skills, settings, proxy,
│ │ # failover, stream_check, usage_rollup, universal_providers
│ ├── proxy/ # Local HTTP proxy (forwarder, circuit breaker, SSE,
│ │ # failover, model mapping, thinking rectifier, …)
│ ├── mcp/ # MCP live-file sync per app
│ ├── session_manager/ # Conversation history browser
│ ├── deeplink/ # ccswitch:// URL parser + importer
│ ├── store.rs # AppState (Arc<Database>, caches)
│ ├── config.rs # Paths helper (get_app_config_dir, …)
│ ├── app_config.rs # AppType, MultiAppConfig, domain models
│ ├── provider.rs # Provider model
│ ├── {claude,codex,gemini,opencode,openclaw}_config.rs # Per-app live-file IO
│ ├── {claude_mcp,claude_plugin,gemini_mcp}.rs # App-specific helpers
│ ├── settings.rs # AppSettings
│ ├── tray.rs # System tray + quick switch
│ ├── error.rs # AppError (thiserror)
│ ├── panic_hook.rs
│ └── ...
│ └── tests/ # Rust integration tests (provider, mcp, deeplink, skill, …)
├── tests/ # Frontend test suite (vitest)
│ ├── setupGlobals.ts, setupTests.ts
│ ├── msw/ # MSW handlers + tauri IPC mocks + state
│ ├── components/ # Component tests
│ ├── hooks/ # Hook tests
│ ├── integration/ # App-level flows
│ ├── config/ # Preset sanity tests
│ └── utils/ # testQueryClient + helpers
├── docs/ # User manual, release notes, proxy guide
├── scripts/ # Icon extraction & index generation
├── assets/ # Screenshots, partner logos
├── flatpak/ # Flatpak build instructions
├── package.json # pnpm scripts (dev/build/typecheck/test/format)
├── vite.config.ts # root = src, alias @ → src
├── vitest.config.ts # jsdom + setup files
├── tsconfig.json # strict; noUnusedLocals/Parameters
├── tailwind.config.cjs, postcss.config.cjs, components.json (shadcn)
└── README.md / README_ZH.md / README_JA.md / CHANGELOG.md / CONTRIBUTING.md
```
## Architecture
```
┌─────────────────────────────────────────────────────────────┐
│ Frontend (React + TS) │
│ Components → Hooks (business logic) → TanStack Query │
│ │ │
│ src/lib/api/* (typed invoke wrappers) │
└────────────────────────────┬────────────────────────────────┘
│ Tauri IPC (camelCase commands)
┌────────────────────────────▼────────────────────────────────┐
│ Backend (Rust + Tauri 2.8) │
│ commands/* (#[tauri::command]) │
│ │ │
│ ▼ │
│ services/* (ProviderService, McpService, PromptService, │
│ SkillService, ProxyService, ConfigService…) │
│ │ │
│ ▼ │
│ database/dao/* → Mutex<rusqlite::Connection> │
│ │
│ + per-app live-file writers (claude/codex/gemini/…) │
│ + proxy/ (hyper + rustls local HTTP proxy) │
│ + session_manager/, deeplink/, mcp/, tray, updater │
└─────────────────────────────────────────────────────────────┘
```
### Core Design Principles
- **Single Source of Truth (SSOT)** — SQLite at `~/.cc-switch/cc-switch.db` holds providers,
MCP, prompts, skills, settings. Syncable state lives in the DB; device-level UI
preferences live in `~/.cc-switch/settings.json`.
- **Dual-way live sync** — On switch, services write the active provider into the CLI's
real config files (e.g. `~/.claude/settings.json`, `~/.codex/config.toml`). When editing
the currently active provider, changes are backfilled from the live file first to avoid
losing edits the user made outside the app.
- **Atomic writes** — Write to a temp file and rename. Never overwrite a live config
in-place.
- **Concurrency safety** — `Database` wraps `rusqlite::Connection` in a `Mutex`, exposed
through `AppState` as `Arc<Database>`. Use the `lock_conn!` macro (see
`src-tauri/src/database/mod.rs`) instead of raw `.lock().unwrap()`.
- **Layered backend** — `commands → services → dao → database`. Commands must stay thin;
put business logic in services. DAOs are the only layer that touches SQL.
- **Auto backups** — `~/.cc-switch/backups/` keeps the 10 most recent snapshots;
`~/.cc-switch/skill-backups/` keeps up to 20 before skill uninstall.
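The atomic-write principle above can be sketched with a small std-only helper. This is a hypothetical illustration of the temp-file + rename pattern, not the actual writer code (the real implementations live in the per-app `*_config.rs` modules):

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

/// Sketch of the temp + rename pattern: the live config file is never
/// truncated in place, so a crash mid-write leaves the old file intact.
fn write_atomic(path: &Path, contents: &[u8]) -> std::io::Result<()> {
    // Write the new contents to a sibling temp file first.
    let tmp = path.with_extension("tmp");
    {
        let mut f = fs::File::create(&tmp)?;
        f.write_all(contents)?;
        f.sync_all()?; // flush to disk before the rename
    }
    // Rename is atomic on the same filesystem and replaces the old file.
    fs::rename(&tmp, path)
}

fn main() -> std::io::Result<()> {
    let target = std::env::temp_dir().join("cc-switch-demo-settings.json");
    write_atomic(&target, b"{\"theme\":\"dark\"}")?;
    assert_eq!(fs::read_to_string(&target)?, "{\"theme\":\"dark\"}");
    Ok(())
}
```

The temp file must live on the same filesystem as the target, since a cross-device rename is not atomic (and fails outright on Linux).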
### Key Services
| Service | Responsibility |
| ------------------ | ----------------------------------------------------------------------- |
| `ProviderService` | Provider CRUD, switching, live-file sync, backfill, sort, auth, usage |
| `McpService` | MCP server CRUD + bidirectional sync across Claude/Codex/Gemini/OpenCode|
| `PromptService` | Prompt presets, active sync to `CLAUDE.md` / `AGENTS.md` / `GEMINI.md` |
| `SkillService` | Skill install from GitHub/ZIP, symlink or copy mode, repo management |
| `ProxyService` | Local HTTP proxy (hyper+rustls) with hot-switch, failover, rectifiers |
| `ConfigService` | Import/export, backup rotation |
| `SpeedtestService` | API endpoint latency probing |
### Data Locations
- `~/.cc-switch/cc-switch.db` — SQLite SSOT (schema version 6)
- `~/.cc-switch/settings.json` — device-level UI preferences
- `~/.cc-switch/backups/` — auto-rotated DB snapshots (keeps 10)
- `~/.cc-switch/skills/` — skills (symlinked into each app by default)
- `~/.cc-switch/skill-backups/` — pre-uninstall skill backups (keeps 20)
## Development Workflow
### Prerequisites
- **Node.js 22.12** (pinned in `.node-version`) — anything ≥ 18 generally works; note that CI runs Node 20
- **pnpm 10.12.3** (pinned in CI; pnpm-workspace)
- **Rust 1.85+** (pinned in `Cargo.toml`)
- **Tauri 2.0 system deps** — see https://v2.tauri.app/start/prerequisites/
### Common Commands
```bash
pnpm install # Install frontend deps
pnpm dev # Run full app (tauri dev with hot reload)
pnpm dev:renderer # Vite-only (no Tauri shell) — useful for UI-only work
pnpm build # Production build (tauri build)
pnpm typecheck # tsc --noEmit (strict)
pnpm format # Prettier write on src/**
pnpm format:check # Prettier check (CI)
pnpm test:unit # vitest run
pnpm test:unit:watch # vitest in watch mode
```
Rust backend (from `src-tauri/`):
```bash
cargo fmt # Format
cargo fmt --check # CI format check
cargo clippy -- -D warnings
cargo test # Backend + integration tests
cargo test --features test-hooks
```
### Pre-submission Checklist
CI will run these; run locally before opening a PR:
```bash
pnpm typecheck && pnpm format:check && pnpm test:unit
cd src-tauri && cargo fmt --check && cargo clippy -- -D warnings && cargo test
```
### Testing
- **Frontend**: `vitest` + `jsdom` + `@testing-library/react`. Tauri `invoke` is mocked via
`tests/msw/tauriMocks.ts`; network requests are mocked with MSW. Shared state
(providers etc.) is reset between tests in `tests/setupTests.ts`.
- **Test query client**: use `tests/utils/testQueryClient.ts` instead of the app client —
it disables retries/cache for deterministic tests.
- **Backend**: integration tests live in `src-tauri/tests/`; unit tests are co-located in
modules. Many tests use `serial_test::serial` because they mutate `HOME`/env — do not
run them with parallelism hacks, and don't remove the `#[serial]` attribute.
- **Rust test-only hooks**: the `test-hooks` cargo feature gates extra test instrumentation.
### CI (`.github/workflows/ci.yml`)
Two jobs on PRs and pushes to `main`:
1. **Frontend Checks** (ubuntu-latest): `pnpm typecheck`, `pnpm format:check`, `pnpm test:unit`
2. **Backend Checks** (ubuntu-22.04): installs GTK/WebKit deps, then
`cargo fmt --check`, `cargo clippy -- -D warnings`, `cargo test`
## Conventions
### Tauri 2.0 IPC
- **Command names are camelCase** on the JS side (e.g. `getProviders`, `switchProvider`).
  On the Rust side, the `#[tauri::command]` functions use snake_case, with
  `#![allow(non_snake_case)]` applied at the crate boundary in `commands/mod.rs`.
- **Never call `invoke` directly in components** — add the call to `src/lib/api/*.ts`
with a typed signature, then import from `@/lib/api`. See `src/lib/api/providers.ts`
for the pattern.
- **Payloads use camelCase**: Rust types carry `#[serde(rename_all = "camelCase")]` where
they cross the IPC boundary.
### Frontend
- **Import alias**: `@/` resolves to `src/` (configured in `vite.config.ts`, `tsconfig.json`,
`vitest.config.ts`). Use `@/components/...`, `@/lib/...`, `@/hooks/...`.
- **Data access**: Prefer TanStack Query hooks from `src/lib/query/` (e.g.
`useProvidersQuery`, `useSettingsQuery`) rather than calling the API layer ad-hoc —
they own cache keys and invalidation.
- **Forms**: `react-hook-form` + `zod` resolvers; schemas live in `src/lib/schemas/`.
- **UI kit**: shadcn/ui primitives under `src/components/ui/`. Configure new primitives
via `components.json` (`npx shadcn add ...`). Icons: `lucide-react`.
- **Styling**: Tailwind utility classes; use the `cn()` helper from `@/lib/utils`.
Dark/light/system theme is controlled by `ThemeProvider`.
- **State strictness**: `noUnusedLocals` and `noUnusedParameters` are on — prefix
intentionally-unused args with `_`.
### Backend (Rust)
- **Errors**: return `Result<T, AppError>` (from `src-tauri/src/error.rs`, built on
`thiserror`). Do not `unwrap()` outside tests; use `?` and map into `AppError`.
- **Concurrency**: never hold a DB lock across an `.await`. Use the `lock_conn!` macro
from `database/mod.rs` for short critical sections.
- **JSON serialization**: use `database::to_json_string` for DB payloads to avoid panics.
- **Live-file IO**: always go through the per-app writer modules
(`claude_config.rs`, `codex_config.rs`, etc.) — they implement atomic temp+rename.
- **Adding a new Tauri command**:
1. Implement logic in the appropriate `services/*` module.
2. Add a thin `#[tauri::command]` wrapper in `src-tauri/src/commands/<domain>.rs`.
3. Register it in the `tauri::generate_handler!` list in `src-tauri/src/lib.rs`.
4. Add the typed wrapper to `src/lib/api/<domain>.ts` and re-export from
`src/lib/api/index.ts`.
5. If it touches DB schema, bump `SCHEMA_VERSION` in `database/mod.rs` and add a
migration step in `database/schema.rs` or `database/migration.rs`.
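The layering in steps 1–3 can be illustrated with a dependency-free sketch. All names here are hypothetical, and the real command would carry `#[tauri::command]`, take `State<'_, AppState>`, and map into `AppError` — this only shows the shape of the commands → services → dao split:

```rust
mod dao {
    // DAO layer: in the real codebase, the only layer that touches SQL.
    pub fn fetch_provider_name(id: u32) -> Option<String> {
        // Stand-in for a rusqlite query.
        (id == 1).then(|| "Anthropic".to_string())
    }
}

mod services {
    use super::dao;
    // Service layer: business logic and error construction live here.
    pub fn provider_display_name(id: u32) -> Result<String, String> {
        dao::fetch_provider_name(id).ok_or_else(|| format!("provider {id} not found"))
    }
}

mod commands {
    use super::services;
    // Command layer: a thin wrapper that only forwards to the service
    // and converts errors for the IPC boundary.
    pub fn get_provider_display_name(id: u32) -> Result<String, String> {
        services::provider_display_name(id)
    }
}

fn main() {
    assert_eq!(commands::get_provider_display_name(1).unwrap(), "Anthropic");
    assert!(commands::get_provider_display_name(2).is_err());
}
```

Keeping the command body to a single forwarding call is what makes the service layer independently testable without a Tauri runtime.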
### Internationalization
CC Switch ships **three locales** and requires all of them to stay in sync:
- `src/i18n/locales/en.json`
- `src/i18n/locales/zh.json` (primary)
- `src/i18n/locales/ja.json`
Rules:
1. Never hardcode user-visible strings. Always use `t('namespace.key')` from
`react-i18next`.
2. When adding/renaming a key, update **all three** files.
3. When removing a key, delete it from all three files.
4. Chinese is the authoritative source for meaning — follow the tone of existing zh
strings when writing new ones.
### Commit Style
[Conventional Commits](https://www.conventionalcommits.org/):
```
feat(provider): add AWS Bedrock preset
fix(tray): resolve menu not refreshing after switch
docs(readme): update install instructions
ci: add format check workflow
chore(deps): bump tauri to 2.8.2
```
Scope should usually match the subsystem (`provider`, `mcp`, `prompt`, `skill`, `proxy`,
`session`, `tray`, `deeplink`, `usage`, `settings`, `i18n`, `backend`, `frontend`, …).
### Pull Requests
- **Open an issue first** for new features — drive-by feature PRs can be closed.
- **Keep PRs small and focused.** One issue, one PR.
- `main` is the base branch; use `feat/…` or `fix/…` branches.
- The repo enforces "explain every line" for AI-assisted PRs — see `CONTRIBUTING.md`.
## Things to Avoid
- **Don't bypass the service/DAO layers.** Commands must not call `rusqlite` directly,
and components must not call `invoke` directly.
- **Don't mutate live CLI config files outside the dedicated writer modules.** They
guarantee atomicity and backfill semantics.
- **Don't add fields to the Tauri IPC boundary without `#[serde(rename_all = "camelCase")]`.**
- **Don't remove `#[serial]` from backend tests that touch HOME / env** — they'll race.
- **Don't add a new i18n key to only one language file** — CI doesn't catch it, but users will.
- **Don't add emojis to source files / commits / UI copy** unless the user explicitly asks.
- **Don't create new top-level docs** (README variants, wiki pages) unless asked — prefer
editing `docs/user-manual/` or the existing README.
- **Don't touch `CHANGELOG.md` by hand** for routine changes — it's maintained per release.
## Quick References
- **Main app shell**: `src/App.tsx` (view routing + header)
- **Bootstrap / providers**: `src/main.tsx`
- **Tauri entry**: `src-tauri/src/lib.rs`
- **Command registration**: search for `tauri::generate_handler!` in `src-tauri/src/lib.rs`
- **DB schema + migrations**: `src-tauri/src/database/schema.rs`,
`src-tauri/src/database/migration.rs`
- **Per-app live config IO**: `src-tauri/src/{claude,codex,gemini,opencode,openclaw}_config.rs`
- **Local proxy**: `src-tauri/src/proxy/` (entry `mod.rs` → `server.rs`)
- **Frontend API layer**: `src/lib/api/*` re-exported from `src/lib/api/index.ts`
- **Query keys & hooks**: `src/lib/query/`
- **Test IPC mocks**: `tests/msw/tauriMocks.ts` + `tests/msw/state.ts`
+2 -5
@@ -307,12 +307,9 @@ pub async fn testUsageScript(
}
#[tauri::command]
pub fn read_live_provider_settings(
state: State<'_, AppState>,
app: String,
) -> Result<serde_json::Value, String> {
pub fn read_live_provider_settings(app: String) -> Result<serde_json::Value, String> {
let app_type = AppType::from_str(&app).map_err(|e| e.to_string())?;
ProviderService::read_live_settings(&state, app_type).map_err(|e| e.to_string())
ProviderService::read_live_settings(app_type).map_err(|e| e.to_string())
}
#[tauri::command]
-1
@@ -1348,7 +1348,6 @@ fn initialize_common_config_snippets(state: &store::AppState) {
}
let settings = match crate::services::provider::ProviderService::read_live_settings(
state,
app_type.clone(),
) {
Ok(s) => s,
+43 -2
@@ -93,6 +93,7 @@ pub fn create_anthropic_sse_stream<E: std::error::Error + Send + 'static>(
) -> impl Stream<Item = Result<Bytes, std::io::Error>> + Send {
async_stream::stream! {
let mut buffer = String::new();
let mut utf8_remainder: Vec<u8> = Vec::new();
let mut message_id = None;
let mut current_model = None;
let mut next_content_index: u32 = 0;
@@ -107,8 +108,7 @@ pub fn create_anthropic_sse_stream<E: std::error::Error + Send + 'static>(
while let Some(chunk) = stream.next().await {
match chunk {
Ok(bytes) => {
let text = String::from_utf8_lossy(&bytes);
buffer.push_str(&text);
crate::proxy::sse::append_utf8_safe(&mut buffer, &mut utf8_remainder, &bytes);
while let Some(pos) = buffer.find("\n\n") {
let line = buffer[..pos].to_string();
@@ -750,4 +750,45 @@ mod tests {
assert!(deltas.contains(&"{\"a\":"));
assert!(deltas.contains(&"1}"));
}
#[tokio::test]
async fn test_streaming_chinese_split_across_chunks_no_replacement_chars() {
// "你好" split across two TCP chunks inside a streaming text delta.
// Before the fix, from_utf8_lossy would produce U+FFFD for each half.
let full = concat!(
"data: {\"id\":\"chatcmpl_3\",\"model\":\"gpt-4o\",\"choices\":[{\"delta\":{\"content\":\"你好\"}}]}\n\n",
"data: {\"id\":\"chatcmpl_3\",\"model\":\"gpt-4o\",\"choices\":[{\"delta\":{},\"finish_reason\":\"stop\"}],\"usage\":{\"prompt_tokens\":5,\"completion_tokens\":2}}\n\n",
"data: [DONE]\n\n"
);
let bytes = full.as_bytes();
// Find "你" in the byte stream and split inside it
let ni_start = bytes.windows(3).position(|w| w == "你".as_bytes()).unwrap();
let split_point = ni_start + 1; // split after first byte of "你"
let chunk1 = Bytes::from(bytes[..split_point].to_vec());
let chunk2 = Bytes::from(bytes[split_point..].to_vec());
let upstream = stream::iter(vec![
Ok::<_, std::io::Error>(chunk1),
Ok::<_, std::io::Error>(chunk2),
]);
let converted = create_anthropic_sse_stream(upstream);
let chunks: Vec<_> = converted.collect().await;
let merged = chunks
.into_iter()
.map(|chunk| String::from_utf8_lossy(chunk.unwrap().as_ref()).to_string())
.collect::<String>();
// Must contain the original Chinese characters, not replacement chars
assert!(
merged.contains("你好"),
"expected '你好' in output, got replacement chars (U+FFFD)"
);
assert!(
!merged.contains('\u{FFFD}'),
"output must not contain U+FFFD replacement characters"
);
}
}
@@ -101,6 +101,7 @@ pub fn create_anthropic_sse_stream_from_responses<E: std::error::Error + Send +
) -> impl Stream<Item = Result<Bytes, std::io::Error>> + Send {
async_stream::stream! {
let mut buffer = String::new();
let mut utf8_remainder: Vec<u8> = Vec::new();
let mut message_id: Option<String> = None;
let mut current_model: Option<String> = None;
let mut has_sent_message_start = false;
@@ -118,8 +119,7 @@ pub fn create_anthropic_sse_stream_from_responses<E: std::error::Error + Send +
while let Some(chunk) = stream.next().await {
match chunk {
Ok(bytes) => {
let text = String::from_utf8_lossy(&bytes);
buffer.push_str(&text);
crate::proxy::sse::append_utf8_safe(&mut buffer, &mut utf8_remainder, &bytes);
// SSE events are delimited by \n\n
while let Some(pos) = buffer.find("\n\n") {
@@ -1029,4 +1029,45 @@ mod tests {
assert_eq!(text_stops, 1);
assert_eq!(text_deltas, vec!["".to_string(), "".to_string()]);
}
#[tokio::test]
async fn test_streaming_responses_chinese_split_across_chunks_no_replacement_chars() {
// Chinese text delta split across two TCP chunks.
let full = concat!(
"event: response.created\n",
"data: {\"type\":\"response.created\",\"response\":{\"id\":\"resp_cn\",\"model\":\"gpt-4o\",\"usage\":{\"input_tokens\":5,\"output_tokens\":0}}}\n\n",
"event: response.output_text.delta\n",
"data: {\"type\":\"response.output_text.delta\",\"delta\":\"你好世界\"}\n\n",
"event: response.completed\n",
"data: {\"type\":\"response.completed\",\"response\":{\"status\":\"completed\",\"usage\":{\"input_tokens\":5,\"output_tokens\":4}}}\n\n"
);
let bytes = full.as_bytes();
// Find "你" and split inside it
let ni_start = bytes.windows(3).position(|w| w == "你".as_bytes()).unwrap();
let split_point = ni_start + 2; // split after second byte of "你"
let chunk1 = Bytes::from(bytes[..split_point].to_vec());
let chunk2 = Bytes::from(bytes[split_point..].to_vec());
let upstream = stream::iter(vec![
Ok::<_, std::io::Error>(chunk1),
Ok::<_, std::io::Error>(chunk2),
]);
let converted = create_anthropic_sse_stream_from_responses(upstream);
let chunks: Vec<_> = converted.collect().await;
let merged = chunks
.into_iter()
.map(|c| String::from_utf8_lossy(c.unwrap().as_ref()).to_string())
.collect::<String>();
assert!(
merged.contains("你好世界"),
"expected '你好世界' in output, got replacement chars (U+FFFD)"
);
assert!(
!merged.contains('\u{FFFD}'),
"output must not contain U+FFFD replacement characters"
);
}
}
@@ -113,6 +113,7 @@ pub fn anthropic_to_openai(body: Value, cache_key: Option<&str>) -> Result<Value
}
}
normalize_openai_system_messages(&mut messages);
result["messages"] = json!(messages);
// Convert parameters: o-series models need max_completion_tokens
@@ -182,6 +183,57 @@ pub fn anthropic_to_openai(body: Value, cache_key: Option<&str>) -> Result<Value
Ok(result)
}
fn normalize_openai_system_messages(messages: &mut Vec<Value>) {
let system_count = messages
.iter()
.filter(|message| message.get("role").and_then(|value| value.as_str()) == Some("system"))
.count();
if system_count == 0 {
return;
}
if system_count == 1 {
if let Some(index) = messages.iter().position(|message| {
message.get("role").and_then(|value| value.as_str()) == Some("system")
}) {
if index > 0 {
let message = messages.remove(index);
messages.insert(0, message);
}
}
return;
}
let mut parts = Vec::new();
messages.retain(|message| {
if message.get("role").and_then(|value| value.as_str()) != Some("system") {
return true;
}
match message.get("content") {
Some(Value::String(text)) if !text.is_empty() => parts.push(text.clone()),
Some(Value::Array(content_parts)) => {
let text = content_parts
.iter()
.filter_map(|part| part.get("text").and_then(|value| value.as_str()))
.collect::<Vec<_>>()
.join("\n");
if !text.is_empty() {
parts.push(text);
}
}
_ => {}
}
false
});
if !parts.is_empty() {
messages.insert(0, json!({"role": "system", "content": parts.join("\n")}));
}
}
/// Convert a single message to OpenAI format (may produce multiple messages)
fn convert_message_to_openai(
role: &str,
@@ -560,6 +612,31 @@ mod tests {
assert_eq!(result["tools"][0]["function"]["name"], "get_weather");
}
#[test]
fn test_anthropic_to_openai_normalizes_fragmented_system_messages() {
let input = json!({
"model": "claude-3-sonnet",
"max_tokens": 1024,
"system": [
{"type": "text", "text": "You are Claude Code."},
{"type": "text", "text": "Be concise."}
],
"messages": [
{"role": "system", "content": "Follow repo conventions."},
{"role": "user", "content": "Hello"}
]
});
let result = anthropic_to_openai(input, None).unwrap();
assert_eq!(result["messages"].as_array().unwrap().len(), 2);
assert_eq!(result["messages"][0]["role"], "system");
assert_eq!(
result["messages"][0]["content"],
"You are Claude Code.\nBe concise.\nFollow repo conventions."
);
assert_eq!(result["messages"][1]["role"], "user");
}
#[test]
fn test_anthropic_to_openai_tool_use() {
let input = json!({
+2 -2
@@ -71,6 +71,7 @@ impl StreamHandler {
async_stream::stream! {
let mut _last_activity = Instant::now();
let mut buffer = String::new();
let mut utf8_remainder: Vec<u8> = Vec::new();
tokio::pin!(stream);
@@ -82,8 +83,7 @@ impl StreamHandler {
_last_activity = Instant::now();
// Parse SSE events
let text = String::from_utf8_lossy(&bytes);
buffer.push_str(&text);
crate::proxy::sse::append_utf8_safe(&mut buffer, &mut utf8_remainder, &bytes);
// Extract complete events
while let Some(pos) = buffer.find("\n\n") {
+2 -2
@@ -568,6 +568,7 @@ pub fn create_logged_passthrough_stream(
) -> impl Stream<Item = Result<Bytes, std::io::Error>> + Send {
async_stream::stream! {
let mut buffer = String::new();
let mut utf8_remainder: Vec<u8> = Vec::new();
let mut collector = usage_collector;
let mut is_first_chunk = true;
@@ -619,8 +620,7 @@ pub fn create_logged_passthrough_stream(
);
}
is_first_chunk = false;
let text = String::from_utf8_lossy(&bytes);
buffer.push_str(&text);
crate::proxy::sse::append_utf8_safe(&mut buffer, &mut utf8_remainder, &bytes);
// Try to parse and log complete SSE events
while let Some(pos) = buffer.find("\n\n") {
+274 -1
@@ -4,9 +4,71 @@ pub(crate) fn strip_sse_field<'a>(line: &'a str, field: &str) -> Option<&'a str>
.or_else(|| line.strip_prefix(&format!("{field}:")))
}
/// Append raw bytes to a UTF-8 `String` buffer, correctly handling multi-byte
/// characters that are split across chunk boundaries.
///
/// `remainder` accumulates trailing bytes from the previous chunk that form an
/// incomplete UTF-8 sequence (at most 3 bytes under normal operation). On each
/// call the remainder is prepended to `new_bytes`, the longest valid UTF-8
/// prefix is appended to `buffer`, and any trailing incomplete bytes are saved
/// back into `remainder` for the next call.
///
/// A defensive guard discards `remainder` via lossy conversion if it ever
/// exceeds 3 bytes, which cannot happen with well-formed UTF-8 streams.
pub(crate) fn append_utf8_safe(buffer: &mut String, remainder: &mut Vec<u8>, new_bytes: &[u8]) {
// Build the byte slice to decode: prepend any leftover bytes from previous chunk.
let (owned, bytes): (Option<Vec<u8>>, &[u8]) = if remainder.is_empty() {
(None, new_bytes)
} else {
// Defensive guard: remainder should never exceed 3 bytes (max incomplete
// UTF-8 sequence is 3 bytes: a 4-byte char missing its last byte). If it
// does, the stream is producing genuinely invalid bytes; flush them lossy
// and start fresh.
if remainder.len() > 3 {
buffer.push_str(&String::from_utf8_lossy(remainder));
remainder.clear();
(None, new_bytes)
} else {
let mut combined = std::mem::take(remainder);
combined.extend_from_slice(new_bytes);
(Some(combined), &[])
}
};
let input = owned.as_deref().unwrap_or(bytes);
// Decode loop: consume all valid UTF-8 and any genuinely invalid bytes,
// only leaving a trailing incomplete sequence in remainder.
let mut pos = 0;
loop {
match std::str::from_utf8(&input[pos..]) {
Ok(s) => {
buffer.push_str(s);
// Everything consumed; remainder stays empty.
return;
}
Err(e) => {
let valid_up_to = pos + e.valid_up_to();
buffer.push_str(
// Safety: from_utf8 guarantees [pos..valid_up_to] is valid UTF-8.
std::str::from_utf8(&input[pos..valid_up_to]).unwrap(),
);
if let Some(invalid_len) = e.error_len() {
// Genuinely invalid byte(s): emit U+FFFD and continue.
buffer.push('\u{FFFD}');
pos = valid_up_to + invalid_len;
} else {
// Incomplete trailing sequence: stash for next chunk.
*remainder = input[valid_up_to..].to_vec();
return;
}
}
}
}
}
#[cfg(test)]
mod tests {
use super::strip_sse_field;
use super::{append_utf8_safe, strip_sse_field};
#[test]
fn strip_sse_field_accepts_optional_space() {
@@ -28,4 +90,215 @@ mod tests {
);
assert_eq!(strip_sse_field("id:1", "data"), None);
}
// ------------------------------------------------------------------
// append_utf8_safe tests
// ------------------------------------------------------------------
#[test]
fn ascii_passthrough() {
let mut buf = String::new();
let mut rem = Vec::new();
append_utf8_safe(&mut buf, &mut rem, b"hello world");
assert_eq!(buf, "hello world");
assert!(rem.is_empty());
}
#[test]
fn complete_multibyte_in_single_chunk() {
let mut buf = String::new();
let mut rem = Vec::new();
append_utf8_safe(&mut buf, &mut rem, "你好世界".as_bytes());
assert_eq!(buf, "你好世界");
assert!(rem.is_empty());
}
#[test]
fn split_multibyte_across_two_chunks() {
// "你" = E4 BD A0 (3 bytes)
let bytes = "你".as_bytes();
assert_eq!(bytes.len(), 3);
let mut buf = String::new();
let mut rem = Vec::new();
// Chunk 1: first 2 bytes (incomplete)
append_utf8_safe(&mut buf, &mut rem, &bytes[..2]);
assert_eq!(buf, "");
assert_eq!(rem.len(), 2);
// Chunk 2: last byte completes the character
append_utf8_safe(&mut buf, &mut rem, &bytes[2..]);
assert_eq!(buf, "你");
assert!(rem.is_empty());
}
#[test]
fn split_four_byte_char_across_chunks() {
// 😀 = F0 9F 98 80 (4 bytes)
let bytes = "😀".as_bytes();
assert_eq!(bytes.len(), 4);
let mut buf = String::new();
let mut rem = Vec::new();
// Send 1 byte at a time
append_utf8_safe(&mut buf, &mut rem, &bytes[..1]);
assert_eq!(buf, "");
assert_eq!(rem.len(), 1);
append_utf8_safe(&mut buf, &mut rem, &bytes[1..2]);
assert_eq!(buf, "");
assert_eq!(rem.len(), 2);
append_utf8_safe(&mut buf, &mut rem, &bytes[2..3]);
assert_eq!(buf, "");
assert_eq!(rem.len(), 3);
append_utf8_safe(&mut buf, &mut rem, &bytes[3..]);
assert_eq!(buf, "😀");
assert!(rem.is_empty());
}
#[test]
fn mixed_ascii_and_split_multibyte() {
// "hi你" = 68 69 E4 BD A0
let all = "hi你".as_bytes();
assert_eq!(all.len(), 5);
let mut buf = String::new();
let mut rem = Vec::new();
// Chunk 1: "hi" + first byte of "你"
append_utf8_safe(&mut buf, &mut rem, &all[..3]);
assert_eq!(buf, "hi");
assert_eq!(rem.len(), 1);
// Chunk 2: remaining 2 bytes of "你"
append_utf8_safe(&mut buf, &mut rem, &all[3..]);
assert_eq!(buf, "hi你");
assert!(rem.is_empty());
}
#[test]
fn multiple_split_characters_in_sequence() {
let text = "你好";
let bytes = text.as_bytes(); // E4 BD A0 E5 A5 BD
let mut buf = String::new();
let mut rem = Vec::new();
// Split in the middle: first char complete + 1 byte of second
append_utf8_safe(&mut buf, &mut rem, &bytes[..4]);
assert_eq!(buf, "你");
assert_eq!(rem.len(), 1);
// Remaining 2 bytes complete second char
append_utf8_safe(&mut buf, &mut rem, &bytes[4..]);
assert_eq!(buf, "你好");
assert!(rem.is_empty());
}
#[test]
fn empty_chunks_are_harmless() {
let mut buf = String::new();
let mut rem = Vec::new();
append_utf8_safe(&mut buf, &mut rem, b"");
assert_eq!(buf, "");
assert!(rem.is_empty());
append_utf8_safe(&mut buf, &mut rem, b"ok");
assert_eq!(buf, "ok");
append_utf8_safe(&mut buf, &mut rem, b"");
assert_eq!(buf, "ok");
}
#[test]
fn sse_json_with_chinese_split_at_boundary() {
// Simulates an SSE data line with Chinese content split across chunks
let json_line = "data: {\"text\":\"你好\"}\n\n";
let bytes = json_line.as_bytes();
// Find where "你" starts in the byte stream and split there
let ni_start = bytes.windows(3).position(|w| w == "你".as_bytes()).unwrap();
let split_point = ni_start + 1; // split inside "你"
let mut buf = String::new();
let mut rem = Vec::new();
append_utf8_safe(&mut buf, &mut rem, &bytes[..split_point]);
append_utf8_safe(&mut buf, &mut rem, &bytes[split_point..]);
assert_eq!(buf, json_line);
assert!(rem.is_empty());
// Verify the buffer can be parsed as SSE with valid JSON
let data = strip_sse_field(buf.lines().next().unwrap(), "data").unwrap();
let parsed: serde_json::Value = serde_json::from_str(data).unwrap();
assert_eq!(parsed["text"], "你好");
}
#[test]
fn invalid_bytes_flushed_immediately_not_accumulated() {
// 0xFF is never valid in UTF-8; it should be replaced immediately,
// not stashed in remainder.
let mut buf = String::new();
let mut rem = Vec::new();
// "hi" + invalid byte + "ok"
append_utf8_safe(&mut buf, &mut rem, b"hi\xFFok");
assert!(
rem.is_empty(),
"remainder should be empty after invalid byte"
);
assert!(buf.contains("hi"), "valid prefix must be present");
assert!(buf.contains("ok"), "valid suffix must be present");
assert!(buf.contains('\u{FFFD}'), "invalid byte must produce U+FFFD");
}
#[test]
fn invalid_byte_in_slow_path_flushed_immediately() {
let mut buf = String::new();
let mut rem = Vec::new();
// Prime remainder with an incomplete sequence (first byte of "你")
append_utf8_safe(&mut buf, &mut rem, &"你".as_bytes()[..1]);
assert_eq!(rem.len(), 1);
// Next chunk starts with an invalid byte; the stale remainder and the
// invalid byte should both be flushed, not accumulated.
append_utf8_safe(&mut buf, &mut rem, b"\xFFworld");
assert!(rem.is_empty(), "remainder should be empty");
assert!(
buf.contains("world"),
"valid data after invalid byte must appear"
);
}
#[test]
fn defensive_guard_flushes_oversized_remainder() {
let mut buf = String::new();
let mut rem = Vec::new();
// Manually inject 4 invalid bytes into remainder to trigger the >3 guard.
// This can't happen with well-formed UTF-8, but tests the safety net.
rem.extend_from_slice(b"\x80\x80\x80\x80");
assert_eq!(rem.len(), 4);
append_utf8_safe(&mut buf, &mut rem, b"hello");
// The 4 invalid bytes should have been flushed lossy, then "hello" decoded.
assert!(rem.is_empty(), "remainder must be empty after guard flush");
assert!(
buf.contains("hello"),
"valid data after guard flush must appear"
);
// The 4 invalid bytes each produce a U+FFFD
let replacement_count = buf.chars().filter(|&c| c == '\u{FFFD}').count();
assert_eq!(
replacement_count, 4,
"each invalid byte should produce one U+FFFD"
);
}
}
+13 -31
@@ -851,26 +851,6 @@ pub(crate) fn sync_current_provider_for_app_to_live(
Ok(())
}
fn read_codex_live_settings_with_auth_fallback(
fallback_auth: Option<Value>,
) -> Result<Value, AppError> {
let auth_path = get_codex_auth_path();
let auth = if auth_path.exists() {
read_json_file(&auth_path)?
} else if let Some(auth) = fallback_auth {
auth
} else {
return Err(AppError::localized(
"codex.auth.missing",
"Codex 配置文件不存在:缺少 auth.json",
"Codex configuration missing: auth.json not found",
));
};
let cfg_text = crate::codex_config::read_and_validate_codex_config_text()?;
Ok(json!({ "auth": auth, "config": cfg_text }))
}
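The helper above encodes a three-step precedence: the on-disk `auth.json` wins, the caller-supplied fallback is used when the file is missing, and only when both are absent does it error. A stdlib-only sketch of that rule (`resolve_auth` is a hypothetical illustration, not the project's function):

```rust
// Prefer the on-disk auth, fall back to the caller-supplied auth,
// and error only when neither source is available.
fn resolve_auth(
    disk_auth: Option<String>,
    fallback_auth: Option<String>,
) -> Result<String, String> {
    disk_auth
        .or(fallback_auth)
        .ok_or_else(|| "Codex configuration missing: auth.json not found".to_string())
}

fn main() {
    // The disk copy wins when auth.json exists.
    assert_eq!(
        resolve_auth(Some("disk".into()), Some("db".into())).unwrap(),
        "disk"
    );
    // The stored provider auth is used when auth.json is missing.
    assert_eq!(resolve_auth(None, Some("db".into())).unwrap(), "db");
    // Neither source available is an error.
    assert!(resolve_auth(None, None).is_err());
    println!("ok");
}
```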
/// Sync current provider to live configuration
///
/// Uses the effective current provider ID (existence verified).
@@ -915,20 +895,22 @@ pub fn sync_current_to_live(state: &AppState) -> Result<(), AppError> {
Ok(())
}
pub(crate) fn read_live_settings_with_auth_fallback(
app_type: AppType,
fallback_auth: Option<Value>,
) -> Result<Value, AppError> {
match app_type {
AppType::Codex => read_codex_live_settings_with_auth_fallback(fallback_auth),
_ => read_live_settings(app_type),
}
}
/// Read current live settings for an app type
pub fn read_live_settings(app_type: AppType) -> Result<Value, AppError> {
match app_type {
AppType::Codex => read_codex_live_settings_with_auth_fallback(None),
AppType::Codex => {
let auth_path = get_codex_auth_path();
if !auth_path.exists() {
return Err(AppError::localized(
"codex.auth.missing",
"Codex 配置文件不存在:缺少 auth.json",
"Codex configuration missing: auth.json not found",
));
}
let auth: Value = read_json_file(&auth_path)?;
let cfg_text = crate::codex_config::read_and_validate_codex_config_text()?;
Ok(json!({ "auth": auth, "config": cfg_text }))
}
AppType::Claude => {
let path = get_claude_settings_path();
if !path.exists() {
+7 -29
@@ -22,16 +22,15 @@ use crate::store::AppState;
// Re-export sub-module functions for external access
pub use live::{
import_default_config, import_openclaw_providers_from_live,
import_opencode_providers_from_live, sync_current_to_live,
import_opencode_providers_from_live, read_live_settings, sync_current_to_live,
};
// Internal re-exports (pub(crate))
pub(crate) use live::sanitize_claude_settings_for_live;
pub(crate) use live::{
build_effective_settings_with_common_config, normalize_provider_common_config_for_storage,
provider_exists_in_live_config, read_live_settings_with_auth_fallback,
strip_common_config_from_live_settings, sync_current_provider_for_app_to_live,
write_live_with_common_config,
provider_exists_in_live_config, strip_common_config_from_live_settings,
sync_current_provider_for_app_to_live, write_live_with_common_config,
};
// Internal re-exports
@@ -1474,16 +1473,8 @@ impl ProviderService {
// no backfill needed (backfill is for exclusive mode apps like Claude/Codex/Gemini)
if !app_type.is_additive_mode() {
// Only backfill when switching to a different provider
if let Some(mut current_provider) = providers.get(&current_id).cloned() {
let fallback_auth = if matches!(app_type, AppType::Codex) {
current_provider.settings_config.get("auth").cloned()
} else {
None
};
if let Ok(live_config) =
read_live_settings_with_auth_fallback(app_type.clone(), fallback_auth)
{
if let Ok(live_config) = read_live_settings(app_type.clone()) {
if let Some(mut current_provider) = providers.get(&current_id).cloned() {
current_provider.settings_config =
strip_common_config_from_live_settings(
state.db.as_ref(),
@@ -1906,21 +1897,8 @@ impl ProviderService {
}
/// Read current live settings (re-export)
pub fn read_live_settings(state: &AppState, app_type: AppType) -> Result<Value, AppError> {
let fallback_auth = if matches!(app_type, AppType::Codex) {
let current_id = crate::settings::get_effective_current_provider(&state.db, &app_type)?;
match current_id {
Some(current_id) => state
.db
.get_provider_by_id(&current_id, app_type.as_str())?
.and_then(|provider| provider.settings_config.get("auth").cloned()),
None => None,
}
} else {
None
};
read_live_settings_with_auth_fallback(app_type, fallback_auth)
pub fn read_live_settings(app_type: AppType) -> Result<Value, AppError> {
read_live_settings(app_type)
}
/// Get custom endpoints list (re-export)
+2 -141
@@ -1,8 +1,8 @@
use serde_json::json;
use cc_switch_lib::{
get_claude_settings_path, get_codex_config_path, read_json_file, write_codex_live_atomic,
AppError, AppType, McpApps, McpServer, MultiAppConfig, Provider, ProviderMeta, ProviderService,
get_claude_settings_path, read_json_file, write_codex_live_atomic, AppError, AppType, McpApps,
McpServer, MultiAppConfig, Provider, ProviderMeta, ProviderService,
};
#[path = "support.rs"]
@@ -238,145 +238,6 @@ command = "say"
);
}
#[test]
fn provider_service_switch_codex_backfills_current_provider_when_auth_json_missing() {
let _guard = test_mutex().lock().expect("acquire test mutex");
reset_test_fs();
let _home = ensure_test_home();
let live_config = r#"[mcp_servers.legacy]
type = "stdio"
command = "echo"
"#;
let config_path = get_codex_config_path();
if let Some(parent) = config_path.parent() {
std::fs::create_dir_all(parent).expect("create codex dir");
}
std::fs::write(&config_path, live_config).expect("seed codex config without auth.json");
let mut initial_config = MultiAppConfig::default();
{
let manager = initial_config
.get_manager_mut(&AppType::Codex)
.expect("codex manager");
manager.current = "old-provider".to_string();
manager.providers.insert(
"old-provider".to_string(),
Provider::with_id(
"old-provider".to_string(),
"Legacy".to_string(),
json!({
"auth": {"OPENAI_API_KEY": "db-key"},
"config": "stale-config"
}),
None,
),
);
manager.providers.insert(
"new-provider".to_string(),
Provider::with_id(
"new-provider".to_string(),
"Latest".to_string(),
json!({
"auth": {"OPENAI_API_KEY": "fresh-key"},
"config": r#"[mcp_servers.latest]
type = "stdio"
command = "say"
"#
}),
None,
),
);
}
let state = create_test_state_with_config(&initial_config).expect("create test state");
ProviderService::switch(&state, AppType::Codex, "new-provider")
.expect("switch provider should succeed without auth.json");
let providers = state
.db
.get_all_providers(AppType::Codex.as_str())
.expect("read providers after switch");
let legacy = providers
.get("old-provider")
.expect("legacy provider should still exist");
assert_eq!(
legacy
.settings_config
.get("auth")
.and_then(|v| v.get("OPENAI_API_KEY"))
.and_then(|v| v.as_str()),
Some("db-key"),
"missing auth.json should fall back to the provider's stored auth during backfill"
);
assert_eq!(
legacy
.settings_config
.get("config")
.and_then(|v| v.as_str()),
Some(live_config),
"backfill should still capture the current live config.toml when auth.json is missing"
);
}
#[test]
fn provider_service_read_live_settings_uses_current_provider_auth_when_auth_json_missing() {
let _guard = test_mutex().lock().expect("acquire test mutex");
reset_test_fs();
let _home = ensure_test_home();
let live_config = r#"[mcp_servers.current]
type = "stdio"
command = "echo"
"#;
let config_path = get_codex_config_path();
if let Some(parent) = config_path.parent() {
std::fs::create_dir_all(parent).expect("create codex dir");
}
std::fs::write(&config_path, live_config).expect("seed codex config without auth.json");
let mut initial_config = MultiAppConfig::default();
{
let manager = initial_config
.get_manager_mut(&AppType::Codex)
.expect("codex manager");
manager.current = "current-provider".to_string();
manager.providers.insert(
"current-provider".to_string(),
Provider::with_id(
"current-provider".to_string(),
"Current".to_string(),
json!({
"auth": {"OPENAI_API_KEY": "db-key"},
"config": "provider-config"
}),
None,
),
);
}
let state = create_test_state_with_config(&initial_config).expect("create test state");
let settings = ProviderService::read_live_settings(&state, AppType::Codex)
.expect("should recover codex live settings from provider auth");
assert_eq!(
settings
.get("auth")
.and_then(|v| v.get("OPENAI_API_KEY"))
.and_then(|v| v.as_str()),
Some("db-key"),
"live settings should reuse stored provider auth when auth.json is missing"
);
assert_eq!(
settings.get("config").and_then(|v| v.as_str()),
Some(live_config),
"live settings should still read config.toml from disk"
);
}
#[test]
fn sync_current_provider_for_app_keeps_live_takeover_and_updates_restore_backup() {
let _guard = test_mutex().lock().expect("acquire test mutex");