Linux users reported the window UI (including native title bar buttons)
couldn't receive clicks until manually maximizing and restoring the
window. Root causes: (1) Tauri webview did not acquire focus on startup
so first clicks were consumed by X11/Wayland click-to-activate
(Tauri #10746, wry #637); (2) GTK surface input region failed to
renegotiate on the visible:false + show() path under some
WebKitGTK/compositor combinations.
- Add linux_fix::nudge_main_window helper that performs set_focus plus
a ±1px no-op resize after window show, with a 500ms reconciliation
readback to compensate for dropped resize requests on slow
compositors.
- Wire the helper into every window re-show path: normal startup,
deeplink, single_instance, tray show_main, and lightweight exit.
- Set WEBKIT_DISABLE_COMPOSITING_MODE=1 at startup to avoid resize
crashes and Wayland surface negotiation issues.
- Remove data-tauri-drag-region on Linux from App.tsx header and the
shared FullScreenPanel (used by all provider/MCP/workspace forms)
to avoid Tauri #13440 in Wayland sessions. Extract drag-region
constants to src/lib/platform.ts for reuse.
All Rust changes are gated by #[cfg(target_os = "linux")]; frontend
changes preserve macOS/Windows behavior via runtime isLinux() checks.
Known limitation: tiling Wayland compositors ignore set_size, so
GDK_BACKEND=x11 remains the user-side workaround.
Use "Skills" consistently in skillStorage title/description and
skillSync title to match the upstream Agent Skills wording and the
existing English label style used elsewhere on the settings page.
- Trim Auth Center section descriptions to focus on user intent
- Remove duplicate outer heading on the auth settings tab
- Swap Sparkles glyph for CodexIcon on the ChatGPT card
- Generalize codexOauth.authStatus to a neutral "Auth status"
- Register settings.authCenter.* keys across zh/en/ja locales
Codex OAuth (ChatGPT Plus/Pro) providers previously fell through to the
default UsageFooter branch and showed no quota at all, while Copilot and
official Codex providers already had a wham/usage-backed quota footer.
This wires up the same five-hour / seven-day tier badges for codex_oauth
provider cards by reusing the existing query_codex_quota function and
SubscriptionQuotaFooter rendering, parameterized to keep both the CLI
credential path ("codex") and the cc-switch managed OAuth path
("codex_oauth") working from a single source of truth.
- Parameterize services::subscription::query_codex_quota with tool_label
and expired_message; promote SubscriptionQuota constructors to
pub(crate). The CLI path keeps its existing "codex" label and the
"re-login with Codex CLI" message; the new path passes "codex_oauth"
and a cc-switch-specific re-login hint.
- Add a new get_codex_oauth_quota Tauri command in commands/codex_oauth.rs
that resolves the ChatGPT account (explicit binding > default account
> not_found), pulls a valid access_token from CodexOAuthManager
(auto-refresh handled), and delegates to query_codex_quota.
- Extract SubscriptionQuotaFooter's render body into a pure
SubscriptionQuotaView component (props: quota / loading / refetch /
appIdForExpiredHint / inline). The existing SubscriptionQuotaFooter
becomes a thin wrapper with identical props and behavior, so
CopilotQuotaFooter and the official Claude/Codex/Gemini paths are
untouched. This avoids duplicating ~280 lines of five-state rendering.
- Add CodexOauthQuotaFooter, a 38-line wrapper that calls the new
useCodexOauthQuota hook and forwards to SubscriptionQuotaView.
- ProviderCard inserts an isCodexOauth branch between isCopilot and
isOfficial, keyed off PROVIDER_TYPES.CODEX_OAUTH (newly added to
config/constants.ts to centralize the previously scattered string).
- Frontend hook caches per (codex_oauth, accountId) so multiple cards
bound to the same ChatGPT account share one fetch via react-query
dedup; cards bound to different accounts get independent fetches.
- No new i18n keys: existing subscription.fiveHour / sevenDay / expired /
refresh / queryFailed / expiredHint are reused.
Adds a new managed OAuth provider that lets Claude Code route requests
through a user's ChatGPT Plus/Pro subscription via the chatgpt.com
backend-api/codex endpoint.
- CodexOAuthManager: OpenAI Device Code flow with multi-account support,
JWT-based account identification, and automatic access_token refresh.
- Reuses the generic managed-auth command surface (auth_start_login,
auth_poll_for_account, etc.) via provider dispatch in commands/auth.rs.
- ClaudeAdapter detects codex_oauth providers, forces the base URL to
the ChatGPT backend, pins api_format to openai_responses, and emits
Authorization + originator headers; the forwarder injects the dynamic
access_token and ChatGPT-Account-Id per request.
- transform_responses gains an is_codex_oauth path that aligns the body
with OpenAI's codex-rs ResponsesApiRequest contract: sets store:false,
appends reasoning.encrypted_content to include, strips max_output_tokens
/ temperature / top_p, injects default instructions/tools/parallel_tool_calls,
and forces stream:true. Covered by 9 new unit tests plus regression
guards for the non-Codex path.
- Stream check reuses the same transform flag so detection matches the
production request shape.
- Frontend adds CodexOAuthSection + useCodexOauth hook, integrates it
into ClaudeFormFields / ProviderForm / AuthCenterPanel, ships a new
"Codex (ChatGPT Plus/Pro)" preset, and adds zh/en/ja i18n strings.
Copilot usage query API was implemented but never surfaced on the main
provider list. Add CopilotQuotaFooter component that auto-detects
github_copilot providers and displays premium interaction utilization
inline, reusing the existing TierBadge UI from SubscriptionQuotaFooter.
Session logs use placeholder provider_ids (_session, _codex_session,
_gemini_session) that don't exist in the providers table, causing LEFT
JOIN to return NULL and display "Unknown". Add COALESCE fallback in all
4 usage queries to show meaningful names like "Claude (Session)".
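The fallback is expressed in SQL via COALESCE; in Rust terms it amounts to this mapping. Only "Claude (Session)" is confirmed above; the Codex/Gemini labels here are illustrative guesses following the same pattern.

```rust
// When the LEFT JOIN against the providers table yields no name, map the
// session placeholder ids to a friendly label instead of "Unknown".
fn display_name(joined_name: Option<&str>, provider_id: &str) -> String {
    if let Some(name) = joined_name {
        return name.to_string(); // a real provider row won the join
    }
    match provider_id {
        "_session" => "Claude (Session)".to_string(),
        "_codex_session" => "Codex (Session)".to_string(),   // assumed label
        "_gemini_session" => "Gemini (Session)".to_string(), // assumed label
        other => other.to_string(),
    }
}
```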
Add dashboard-level app type filter to usage statistics, replacing the
DataSourceBar with a more useful segmented control. All components
(summary cards, trend chart, provider stats, model stats, request logs)
now respond to the selected app filter.
Backend: add optional app_type parameter to get_usage_summary,
get_daily_trends, get_provider_stats, and get_model_stats queries.
Frontend: new AppTypeFilter type, updated query keys with appType
dimension for proper cache separation, and RequestLogTable local
filter auto-locks when dashboard filter is active.
- Use UPSERT with WHERE guard instead of INSERT OR IGNORE, so updated
token values on existing messages are properly synced without
unnecessary rewrites of unchanged rows
- Include cached tokens in the skip-zero filter to stop silently
discarding pure cache-hit records
- Restrict file collection to session-*.json to match documented scope
and prevent ingesting non-session JSON files
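The two filter changes above reduce to small predicates; a minimal sketch (function names are illustrative):

```rust
// Restrict collection to the documented session-*.json scope.
fn should_ingest_file(name: &str) -> bool {
    name.starts_with("session-") && name.ends_with(".json")
}

// Previously only input+output were checked, which silently dropped
// pure cache-hit records (input=0, output=0, cached>0).
fn should_keep_record(input: u64, output: u64, cached: u64) -> bool {
    input > 0 || output > 0 || cached > 0
}
```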
Parse ~/.gemini/tmp/*/chats/session-*.json for precise per-message
token data (input/output/cached/thoughts). Integrates with existing
background sync and manual sync button alongside Claude and Codex.
Normalize model names from JSONL session logs before storage and pricing
lookup: lowercase, strip provider prefix (openai/), strip date suffixes
(-YYYY-MM-DD, -YYYYMMDD). Also clamp cached tokens to not exceed input.
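A dependency-free sketch of that normalization pipeline (the real implementation may differ in how it matches date suffixes):

```rust
fn normalize_model_name(raw: &str) -> String {
    let mut name = raw.to_lowercase();
    if let Some(stripped) = name.strip_prefix("openai/") {
        name = stripped.to_string(); // drop provider prefix
    }
    let parts: Vec<&str> = name.split('-').collect();
    let is_num = |s: &str, n: usize| s.len() == n && s.chars().all(|c| c.is_ascii_digit());
    // Strip -YYYY-MM-DD (e.g. "gpt-4o-2024-08-06" -> "gpt-4o").
    if parts.len() >= 4 {
        let k = parts.len();
        if is_num(parts[k - 3], 4) && is_num(parts[k - 2], 2) && is_num(parts[k - 1], 2) {
            return parts[..k - 3].join("-");
        }
    }
    // Strip -YYYYMMDD (e.g. "claude-sonnet-20250514" -> "claude-sonnet").
    if parts.len() >= 2 && is_num(parts[parts.len() - 1], 8) {
        return parts[..parts.len() - 1].join("-");
    }
    name
}

// Cached tokens can never legitimately exceed input tokens.
fn clamp_cached(cached: u64, input: u64) -> u64 {
    cached.min(input)
}
```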
- Fix 13 Chinese model prices that were stored as CNY values in USD
fields (DeepSeek, Kimi, MiniMax, GLM, Doubao, Mimo)
- Add 12 new models: GPT-5.4/mini/nano, o3, o4-mini, GPT-4.1/mini/nano,
Gemini 3.1 Pro/Flash Lite, Gemini 2.5 Flash Lite, Gemini 2.0 Flash,
DeepSeek Chat/Reasoner, Kimi K2.5
- Merge pricing migration into existing v7→v8 to avoid extra version bump
Parse Claude Code JSONL session files (~/.claude/projects/) and Codex
SQLite database (~/.codex/state_5.sqlite) to track API usage without
requiring proxy interception. This enables usage statistics for users
who don't use the proxy feature.
Key changes:
- Add session_usage.rs: incremental JSONL parser with message.id dedup
- Add session_usage_codex.rs: import thread-level token data from Codex
- Add data_source column to proxy_request_logs (proxy/session_log/codex_db)
- Add session_log_sync table for tracking parse offsets
- Background sync every 60s + manual sync via DataSourceBar UI
- Schema migration v7→v8
- i18n support for zh/en/ja
- Hide "暂无描述" text when skill has no description (skills.sh API
doesn't return descriptions), show empty spacer instead
- Change skills.sh result link from guessed subdirectory path to repo
root URL, since skillId doesn't reflect the actual nested path
Add skills.sh API integration allowing users to search and install from
a catalog of 91K+ agent skills directly within CC Switch. The search
results are converted to DiscoverableSkill objects and reuse the existing
install pipeline. Includes fallback directory search for repos where
skills are nested in subdirectories, and filters out non-GitHub sources.
Allow users to choose between storing skills in CC Switch's managed
directory (~/.cc-switch/skills/) or the Agent Skills open standard
directory (~/.agents/skills/). Includes migration logic that safely
moves files before updating settings, with confirmation dialog for
non-empty installations.
- Add content_hash and updated_at fields to skills table (DB migration v6→v7)
- Compute directory content hash on install/import/restore for version tracking
- Add check_updates command: downloads repos, compares hashes, returns update list
- Add update_skill command: backs up old files, re-downloads and replaces SSOT
- Backfill content_hash for existing skills on first update check
- Add "Check Updates" button and per-skill update badge/button in UnifiedSkillsPanel
- Add i18n keys for zh/en/ja
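The content-hash idea above can be sketched as a deterministic walk over the skill directory. This is a hedged illustration: the real content_hash most likely uses a stable digest such as SHA-256, whereas `DefaultHasher` is used here only to keep the example dependency-free (its output is not stable across Rust releases).

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::path::Path;

fn dir_content_hash(dir: &Path) -> std::io::Result<u64> {
    let mut entries: Vec<_> = std::fs::read_dir(dir)?.collect::<Result<Vec<_>, _>>()?;
    entries.sort_by_key(|e| e.path()); // deterministic traversal order
    let mut hasher = DefaultHasher::new();
    for entry in entries {
        let path = entry.path();
        path.file_name().unwrap().to_string_lossy().hash(&mut hasher);
        if path.is_dir() {
            dir_content_hash(&path)?.hash(&mut hasher); // recurse into subdirs
        } else {
            std::fs::read(&path)?.hash(&mut hasher); // hash file contents
        }
    }
    Ok(hasher.finish())
}
```

Comparing this hash against the freshly downloaded repo's hash is what drives the per-skill update badge.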
- Change default autoQueryInterval from 0 (disabled) to 5 minutes for
new usage scripts (Token Plan, Balance, and general templates)
- Fix controlled number inputs (timeout & interval) that couldn't be
cleared: defer validation to onBlur so users can delete and retype
Test and ConfigureUsage buttons are now always rendered instead of
conditionally, with a disabled style for providers that don't support
them (e.g. official subscriptions). This ensures consistent button
container width so the usage display aligns uniformly across all cards.
Add a new "Official" (官方) template type in the usage query panel that
queries account balance via each provider's native API endpoint.
Follows the same zero-script pattern as Token Plan — Rust handles the
HTTP call, frontend auto-detects the provider from base URL.
Supported providers and endpoints:
- DeepSeek: GET /user/balance
- StepFun: GET /v1/accounts
- SiliconFlow: GET /v1/user/info (cn + com)
- OpenRouter: GET /api/v1/credits
- Novita AI: GET /v3/user/balance
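The endpoint table above reduces to a simple dispatch; a sketch with illustrative provider keys (the real auto-detection keys off the base URL, not a plain string id):

```rust
// Path table taken directly from the list above.
fn balance_endpoint(provider: &str) -> Option<&'static str> {
    match provider {
        "deepseek" => Some("/user/balance"),
        "stepfun" => Some("/v1/accounts"),
        "siliconflow" => Some("/v1/user/info"), // same path on .cn and .com
        "openrouter" => Some("/api/v1/credits"),
        "novita" => Some("/v3/user/balance"),
        _ => None, // unsupported providers fall back to script-based queries
    }
}
```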
* fix: handle UTF-8 multi-byte characters split across stream chunk boundaries
Replace String::from_utf8_lossy with append_utf8_safe in all four SSE
streaming paths. When a multi-byte UTF-8 character (e.g. Chinese, emoji)
is split across TCP chunk boundaries, from_utf8_lossy silently replaces
the incomplete halves with U+FFFD (�). This caused intermittent garbled
output in Claude Code when using the Copilot reverse proxy, because the
format conversion streams reconstruct SSE events from the corrupted buffer.
The new append_utf8_safe function preserves incomplete trailing bytes in
a remainder buffer and merges them with the next chunk before decoding,
ensuring characters are never split during UTF-8 conversion.
Fixes: intermittent U+FFFD replacement characters in Claude Code output
via Copilot proxy (not reproducible with direct Copilot connections like
opencode because they pass through raw bytes without format conversion).
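The remainder-buffer approach can be sketched as follows. Names and signature are illustrative, not the actual src-tauri implementation, but the logic matches the description: valid prefixes are decoded immediately, an incomplete trailing sequence is stashed for the next chunk, and only genuinely invalid bytes become U+FFFD.

```rust
fn append_utf8_safe(out: &mut String, remainder: &mut Vec<u8>, chunk: &[u8]) {
    remainder.extend_from_slice(chunk);
    let mut buf = std::mem::take(remainder);
    loop {
        match std::str::from_utf8(&buf) {
            Ok(s) => {
                out.push_str(s);
                return;
            }
            Err(e) => {
                let valid = e.valid_up_to();
                out.push_str(std::str::from_utf8(&buf[..valid]).unwrap());
                match e.error_len() {
                    // None = truncated sequence at end of input: keep the
                    // trailing bytes and merge them with the next chunk.
                    None => {
                        *remainder = buf[valid..].to_vec();
                        return;
                    }
                    // Some(n) = genuinely invalid bytes: replace and continue.
                    Some(n) => {
                        out.push('\u{FFFD}');
                        buf = buf[valid + n..].to_vec();
                    }
                }
            }
        }
    }
}
```

`Utf8Error::error_len()` returning `None` is exactly the "character split across chunk boundaries" case, which `from_utf8_lossy` would have mangled.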
* style: fix cargo fmt formatting in UTF-8 boundary tests
---------
Co-authored-by: Cod1ng <codingts@gmail.com>
Co-authored-by: encodets <encodets@gmail.com>
Some OpenAI-compatible chat providers reject requests when Claude-side
system fragments arrive as multiple system messages. Normalize the
converted OpenAI chat payload so system content becomes a single
leading system message while leaving the rest of the message stream
unchanged.
Constraint: Nvidia/Qwen-style chat completions require a single leading system prompt
Rejected: Reorder system messages only | still leaves fragmented system prompts for strict backends
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Keep OpenAI chat system prompts normalized unless a provider explicitly requires fragmented system messages
Tested: cargo test proxy::providers::transform --manifest-path src-tauri/Cargo.toml
Not-tested: Full end-to-end proxy capture against Nvidia upstream in this session
Related: #1881
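A minimal sketch of that normalization. `ChatMsg` is a plain-string stand-in for the real JSON payload, and the `"\n\n"` fragment separator is an assumption:

```rust
#[derive(Debug, Clone, PartialEq)]
struct ChatMsg {
    role: String,
    content: String,
}

fn normalize_system_messages(messages: Vec<ChatMsg>) -> Vec<ChatMsg> {
    // Pull out every system fragment; partition preserves relative order.
    let (system, rest): (Vec<_>, Vec<_>) =
        messages.into_iter().partition(|m| m.role == "system");
    if system.is_empty() {
        return rest;
    }
    let merged = system
        .iter()
        .map(|m| m.content.as_str())
        .collect::<Vec<_>>()
        .join("\n\n"); // separator choice is an assumption
    let mut out = vec![ChatMsg { role: "system".into(), content: merged }];
    out.extend(rest); // non-system messages keep their original order
    out
}
```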
The function is only called from read_gemini_credentials_from_keychain
which is already macOS-only. Without the gate, non-macOS CI fails with
dead-code error due to -D warnings.
usage_count is remaining quota (starts at total, decreases to 0),
not used count. Invert calculation so all providers consistently
show 0% when fresh and 100% when exhausted.
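The corrected calculation, sketched (names illustrative):

```rust
// usage_count is *remaining* quota, so utilization must be inverted.
fn utilization(remaining: f64, total: f64) -> f64 {
    if total <= 0.0 {
        return 0.0; // avoid division by zero when the total is unknown
    }
    ((total - remaining) / total).clamp(0.0, 1.0)
}
```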
Add a new "Token Plan" template type in the usage query panel that
natively queries quota/usage from Chinese coding plan providers
(Kimi For Coding, Zhipu GLM, MiniMax) without requiring custom scripts.
- Rust backend: new coding_plan service with provider-specific API
queries (Kimi /v1/usages, Zhipu /api/monitor/usage/quota/limit,
MiniMax /coding_plan/remains) normalized into UsageResult
- Frontend: Token Plan template in UsageScriptModal with auto-detection
of provider based on ANTHROPIC_BASE_URL pattern matching
- Follows the same pattern as GitHub Copilot template (dedicated API
path in queryProviderUsage, no JS script needed)
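The auto-detection step amounts to substring matching on `ANTHROPIC_BASE_URL`; a sketch where the URL patterns are assumptions for illustration (the real matcher may use full host patterns):

```rust
#[derive(Debug, PartialEq)]
enum CodingPlanProvider {
    Kimi,
    Zhipu,
    MiniMax,
}

fn detect_coding_plan(base_url: &str) -> Option<CodingPlanProvider> {
    let url = base_url.to_lowercase();
    if url.contains("moonshot") || url.contains("kimi") {
        Some(CodingPlanProvider::Kimi)
    } else if url.contains("bigmodel") || url.contains("zhipu") {
        Some(CodingPlanProvider::Zhipu)
    } else if url.contains("minimax") {
        Some(CodingPlanProvider::MiniMax)
    } else {
        None // not a recognized coding-plan provider; keep script-based flow
    }
}
```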
Remove the hard block that prevented switching to providers requiring
proxy (OpenAI format, Copilot, full URL mode) when the proxy is not
running. Now the switch proceeds with a warning toast. Also deduplicate
the proxy hint info toast so it doesn't appear alongside the warning.
Re-enable GitHub Copilot provider preset and the OAuth auth center tab
that were temporarily hidden due to abnormal consumption rates. The
Copilot optimizer introduced in the previous commit addresses the
underlying issue.
Implement request classification, tool result merging, compact detection,
deterministic request IDs, and warmup downgrade for Copilot proxy.
The root cause was x-initiator being hardcoded to "user", making Copilot
count every API request (including tool callbacks and agent continuations)
as a separate premium interaction. The optimizer dynamically classifies
requests as "user" or "agent" based on message content analysis.
Closes #1813
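The classification idea can be sketched as below. This is a deliberately simplified stand-in: the real optimizer analyzes message content, while here only the last message's role decides.

```rust
struct Msg {
    role: &'static str,
}

fn x_initiator(messages: &[Msg]) -> &'static str {
    match messages.last() {
        // Tool callbacks and agent continuations must not be billed as
        // fresh premium interactions.
        Some(m) if m.role == "tool" || m.role == "assistant" => "agent",
        // A conversation ending on a user turn is a genuine user request.
        _ => "user",
    }
}
```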
Official providers use built-in subscription quota display instead of
custom usage scripts, and stream check is not applicable. Hide both
action buttons when isOfficialProvider is true.
- Read Gemini OAuth credentials from macOS Keychain (gemini-cli-oauth)
or legacy file (~/.gemini/oauth_creds.json)
- Auto-refresh expired access tokens using refresh_token (Google OAuth
tokens expire in ~1h, unlike Claude/Codex)
- Two-step API: loadCodeAssist for project ID, then retrieveUserQuota
for per-model quota buckets
- Classify models into Pro/Flash/Flash Lite categories, show min
remaining fraction as utilization percentage
- Extend isOfficialProvider() for Gemini (no API key + no base URL)
- Parameterize expiredHint i18n key with tool name for all three apps
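The classification and min-fraction steps, sketched. The category rules are inferred from Gemini model-name conventions; check order matters because "flash-lite" also contains "flash".

```rust
#[derive(Debug, PartialEq)]
enum GeminiTier {
    Pro,
    Flash,
    FlashLite,
    Other,
}

fn classify_gemini(model: &str) -> GeminiTier {
    let m = model.to_lowercase();
    if m.contains("flash-lite") {
        GeminiTier::FlashLite
    } else if m.contains("flash") {
        GeminiTier::Flash
    } else if m.contains("pro") {
        GeminiTier::Pro
    } else {
        GeminiTier::Other
    }
}

// Displayed utilization = 1 - (minimum remaining fraction across buckets).
fn category_utilization(remaining_fractions: &[f64]) -> f64 {
    1.0 - remaining_fractions.iter().cloned().fold(1.0, f64::min)
}
```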
Read Codex OAuth credentials from ~/.codex/auth.json (with macOS
Keychain fallback) and query chatgpt.com/backend-api/wham/usage to
show rate limit utilization on official Codex provider cards. Reuses
the same tier naming (five_hour, seven_day) for frontend i18n compat.
Read Claude OAuth credentials from macOS Keychain (with file fallback)
and query the Anthropic usage API to show quota utilization inline on
official provider cards. Includes compact countdown timer for reset
windows and hides the rarely-used seven_day_sonnet tier in inline mode.
Each app type (Claude/Codex/Gemini) now renders as a submenu instead
of flat items, keeping the top-level tray menu compact regardless of
provider count. The submenu label shows the current provider name
(e.g. "Claude · OpenRouter") for at-a-glance visibility.
Distinguish between missing API key, missing endpoint, auth failure,
unsupported provider (404/405), and timeout errors instead of showing
a generic failure toast for all cases.
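A sketch of the error taxonomy. The variant names come from the list above; the mapping from HTTP status codes to variants (401/403 as auth failure) is an assumption, while 404/405 as "unsupported provider" is stated in the commit.

```rust
#[derive(Debug, PartialEq)]
enum UsageQueryError {
    MissingApiKey,      // detected before the request is sent
    MissingEndpoint,    // detected before the request is sent
    AuthFailed,
    UnsupportedProvider,
    Timeout,
    Other(u16),
}

fn classify_response(status: u16, timed_out: bool) -> Option<UsageQueryError> {
    if timed_out {
        return Some(UsageQueryError::Timeout);
    }
    match status {
        200..=299 => None, // success: no error toast needed
        401 | 403 => Some(UsageQueryError::AuthFailed),
        404 | 405 => Some(UsageQueryError::UnsupportedProvider),
        other => Some(UsageQueryError::Other(other)),
    }
}
```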
Add ability to fetch available models from third-party aggregation
providers (SiliconFlow, OpenRouter, etc.) via OpenAI-compatible
GET /v1/models endpoint. Users can click "Fetch Models" button in
the provider form, then select models from a dropdown on each
model input field.
- Backend: new model_fetch service + Tauri command (Rust)
- Frontend: ModelInputWithFetch shared component
- Integrated into all 5 app forms (Claude/Codex/Gemini/OpenCode/OpenClaw)
- i18n support for zh/en/ja
* fix(copilot): fix GitHub Copilot 400 authentication error
Problem: requests through the GitHub Copilot provider failed with 400 bad request
Root cause: comparison with the copilot-api project revealed several discrepancies
Fixes:
- Bump the version number from 0.26.7 to 0.38.2
- Bump the API version from 2025-04-01 to 2025-10-01
- Add missing required headers
- Correct the openai-intent value
- Add dynamic API endpoint support
- Apply the same header updates to stream_check.rs
Closes #1777
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* fix: flush stream after write_all in hyper_client proxy
Add explicit flush() calls after write_all() for TLS stream, plain TCP
stream, and CONNECT tunnel requests to ensure buffered data is sent
immediately, preventing connection hangs in Copilot auth header flow.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: clipboard may fail to copy the verification code on macOS and Linux during login
* 1. Fix handling of the different Copilot account types (individual, business, etc.)
2. Make verification-code copying an asynchronous operation
* fix: address PR review comments for Copilot auth
- Fix clipboard blocking by using spawn_blocking for arboard ops
- Implement dynamic endpoint routing for enterprise Copilot users
- Add api_endpoints cache cleanup in remove_account() and clear_auth()
- Change API endpoint log level from info to debug
- Fix clear_auth() to continue cleanup even if file deletion fails
- Add 9 unit tests for Copilot detection and api_endpoints caching
* style: fix cargo fmt formatting
* Fix Copilot dynamic endpoint handling
* fix: restore clear_auth() memory-first cleanup order and fix cache leaks
- Restore clear_auth() to clean memory state before deleting the storage
file. The previous order (file deletion first) caused a regression where
users could get stuck in a "cannot log out" state if file removal failed.
- Add missing copilot_models.clear() in clear_auth() — this cache was
cleaned in remove_account() but never in the full clear path.
- Add endpoint_locks cleanup in both remove_account() and clear_auth()
to prevent minor in-process memory leaks.
- Update test to assert the correct behavior: memory should be cleaned
even when file deletion fails.
---------
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: 周梦泽 <mengze.zhou@dafeng-tech.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Jason <farion1231@gmail.com>