Commit Graph

659 Commits

Author SHA1 Message Date
xpfo
7ddf7ffd66 fix(proxy): handle Codex OAuth responses as streaming (#2235) 2026-04-23 08:54:31 +08:00
Jason
c7ba3cf51d fix(pricing): correct Kimi K2.6 seed prices to match official rates
Moonshot's official USD pricing for kimi-k2.6 is $0.95 input /
$4.00 output / $0.16 cache-hit per 1M tokens (~58-60% higher than
K2.5). The previous commit copied K2.5's $0.60/$2.50/$0.10, which
would have under-billed K2.6 traffic in the usage dashboard.

No migration needed since this version is unreleased; INSERT OR
IGNORE will write the correct values on first launch.
2026-04-22 14:32:03 +08:00
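The `INSERT OR IGNORE` seeding the commit above relies on can be sketched in isolation. This is an illustrative model, not the project's code: a `HashMap` entry stands in for the SQLite row, and the USD-per-1M-token prices are the ones quoted in the commit message. The key property is idempotence — re-running the seed on every startup never clobbers rows that already exist.

```rust
use std::collections::HashMap;

// Prices are USD per 1M tokens, taken from the commit message above.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Pricing {
    input: f64,
    output: f64,
    cache_hit: f64,
}

// Models `INSERT OR IGNORE INTO model_pricing ...` run on every launch:
// a row is written only if the key is absent, so corrected values seeded
// on first launch are never overwritten by later startups.
fn seed_model_pricing(table: &mut HashMap<String, Pricing>) {
    let seeds = [
        ("kimi-k2.5", Pricing { input: 0.60, output: 2.50, cache_hit: 0.10 }),
        ("kimi-k2.6", Pricing { input: 0.95, output: 4.00, cache_hit: 0.16 }),
    ];
    for (model, price) in seeds {
        // INSERT OR IGNORE semantics: only write when the row is missing.
        table.entry(model.to_string()).or_insert(price);
    }
}
```

This is also why no migration is needed: an unreleased version has no existing `kimi-k2.6` row, so the corrected prices land on first launch.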
Jason
85295cf23f chore(release): bump version to 3.14.0 2026-04-22 12:29:27 +08:00
Jason
1f25411524 feat(presets): upgrade Kimi K2.5 to K2.6 in direct Moonshot configs
Bump model id and display name from K2.5 to K2.6 in Hermes, OpenClaw,
OpenCode, and Claude (direct api.moonshot.cn) presets. Pricing,
context window, and base URL are unchanged.

Add kimi-k2.6 row to model_pricing seed; no migration needed since
seed_model_pricing uses INSERT OR IGNORE and runs on every startup
via ensure_model_pricing_seeded. Old kimi-k2.5 row is kept to
preserve historical usage stats.

Nvidia aggregator forwards (moonshotai/kimi-k2.5) intentionally keep
the K2.5 SKU until Nvidia's catalog confirms K2.6.
2026-04-22 12:17:19 +08:00
Jason
7ac822d241 fix(hermes): stop health check from borrowing OpenClaw schema
Hermes providers were routed through check_additive_app_stream, the
OpenClaw dispatcher, which reads camelCase fields (baseUrl/apiKey/api)
and emits "OpenClaw is missing ..." errors. Hermes stores snake_case
fields (base_url/api_key/api_mode) with different protocol tags, so
users saw "OpenClaw provider is missing baseUrl" even after filling in
every Hermes field correctly.

Introduce check_hermes_stream with Hermes-specific extractors. Route
api_mode (chat_completions / anthropic_messages / codex_responses) to
the matching check_claude_stream api_format, and return bedrock_converse
as unsupported. Resolve api_mode before extracting URL/API key so users
who picked bedrock_converse see the real cause first rather than a
misleading "missing base_url" message.
2026-04-21 23:00:37 +08:00
Jason
8e9452b7ec feat(hermes): launch dashboard from toolbar when Web UI is offline
When the Hermes Web UI probe fails, the toolbar entry now opens an info
confirm dialog offering to run `hermes dashboard` in the user's preferred
terminal. Accepting spawns it via a temp bash/batch script; `hermes
dashboard` itself opens the browser once ready, so we do not poll.
The Memory panel and Health banner keep the existing toast behavior.

Also corrects the stale `hermes web` hint in the offline toast (the real
command is `hermes dashboard`) and reorders Linux terminal detection to
try `which` before stat'ing /usr/bin, /bin, /usr/local/bin.
2026-04-21 21:11:15 +08:00
Jason
1227e29d8b feat(skills): enable Hermes in unified Skills management
Wire hermes through SkillApps struct, DAO SQL, command parser, and
SKILLS_APP_IDS. Add a Skills entry to the Hermes toolbar. Simplify
skill_sync test fixtures to use SkillApps::default().
2026-04-21 11:57:07 +08:00
Jason
0be75668cc feat(hermes): align provider schema with Hermes Agent 0.10.0
Hermes 0.10.0 tightened custom_providers validation (commit 2cdae233):
invalid base_urls are rejected, unknown fields produce warnings, and
new fields (rate_limit_delay, bedrock_converse, key_env) landed.

- Add bedrock_converse to the api_mode selector (and i18n labels)
- Expose rate_limit_delay in a provider-level advanced panel
- Validate base_url client-side (URL shape, template-token friendly)
- Drop per-model max_tokens — not in _VALID_CUSTOM_PROVIDER_FIELDS
- Round-trip test asserts set_provider preserves rate_limit_delay /
  key_env / any unknown forward-compat field
2026-04-21 11:57:07 +08:00
Jason
111ddf8d73 style: apply prettier and rustfmt
No behavior changes. Brings three files back in line with the project
formatters (CI runs `pnpm format:check` and `cargo fmt --check`).
2026-04-21 11:57:07 +08:00
Jason
08727528aa test: sync stale fixtures and isolate openclaw env tests
Three unrelated test failures surfaced after rebase:

- McpFormModal expected the apps boolean set without `hermes`; Hermes MCP
  support is now wired, so the fixture must include `hermes: false`.
- therouter Gemini preset was bumped to `gemini-3.1-pro` in a later
  commit; update the assertions to match current config.
- openclaw_config tests mutate process-level `CC_SWITCH_TEST_HOME` and
  `HOME` inside a module-local Mutex, but hermes_config does the same
  under its own separate Mutex. Running both modules in parallel let the
  env races corrupt hermes_config's `with_test_home`. Tag the four
  env-mutating openclaw tests with `#[serial]` so they serialize across
  modules via serial_test's process-wide default key.
2026-04-21 11:57:07 +08:00
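The env-race described above comes from two modules each guarding `std::env` mutation with their own `Mutex`: tests in different modules still run concurrently and race on process-global state. `#[serial]` from the serial_test crate fixes this by serializing on one process-wide key; a minimal dependency-free sketch of the same idea uses a single shared lock (function names here are illustrative, not the project's actual helpers):

```rust
use std::sync::{Mutex, OnceLock};

// One process-wide lock shared by *every* env-mutating test, regardless of
// module. Two module-local Mutexes would not serialize across modules.
fn env_lock() -> &'static Mutex<()> {
    static LOCK: OnceLock<Mutex<()>> = OnceLock::new();
    LOCK.get_or_init(|| Mutex::new(()))
}

// Runs `f` with CC_SWITCH_TEST_HOME set, holding the shared lock so no
// concurrent test observes (or clobbers) the temporary value.
fn with_test_home<T>(home: &str, f: impl FnOnce() -> T) -> T {
    let _guard = env_lock().lock().unwrap();
    std::env::set_var("CC_SWITCH_TEST_HOME", home);
    let out = f();
    std::env::remove_var("CC_SWITCH_TEST_HOME");
    out
}
```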
Jason
21a1518f9f refactor(hermes): drop "Auto" api_mode and require explicit protocol
Hermes' built-in api_mode detection only matches a handful of official
endpoints (api.openai.com, api.anthropic.com, api.x.ai, AWS Bedrock);
third-party / proxy endpoints silently fall back to chat_completions,
which causes opaque 401/404s on Anthropic-protocol or Codex-Responses
providers. The "Auto" option was misleading for the common third-party
case.

- Drop the "Auto" option from the API Mode dropdown; remove the
  HermesApiModeChoice sentinel type so writes always emit api_mode.
- Default new providers and legacy entries lacking api_mode to
  chat_completions (only persisted on user save).
- Deeplink imports now write api_mode: chat_completions explicitly
  instead of relying on URL heuristics; test renamed accordingly.
- Rename the "Codex Responses (Copilot / OpenCode)" label to
  "OpenAI Responses" to match OpenAI's /v1/responses naming.
2026-04-21 11:57:07 +08:00
Jason
f57edfd697 refactor(hermes): share provider-source marker constants and write guard
After /simplify review of the P1-3 second wave, two small cleanups:

- Lift the `_cc_source` / `providers_dict` magic strings out of
  ProviderCard into a shared helper (`isHermesReadOnlyProvider`) and
  named constants in hermesProviderPresets.ts. Front-end and back-end
  now document the same marker contract in two mirrored places
  instead of drifting strings.

- Replace the duplicate `is_dict_only_provider` + `format!` branches
  at the top of `set_provider` / `remove_provider` with a single
  `ensure_provider_writable(config, name, verb)` guard. Future error
  copy tweaks only have to happen once.

No behaviour change; all 52 hermes_config tests stay green.
2026-04-21 11:57:07 +08:00
Jason
3f4739365e feat(hermes): surface providers: dict entries read-only
Hermes v12+ migrated some provider entries from the `custom_providers:`
list into a `providers:` dict (keyed by id). CC Switch previously
ignored that source entirely, leaving users blind to providers they had
configured via Hermes' own Web UI; the only feedback was a generic
migration warning in the health banner.

`get_providers()` now unions both sources, matching upstream
`get_compatible_custom_providers` dedup order (list wins on name
collision). Entries coming from the dict carry a `_cc_source =
"providers_dict"` marker plus the original `provider_key`, which the
UI layer will use to render them read-only. `set_provider` and
`remove_provider` now refuse to touch dict-only entries, steering the
user to Hermes Web UI. `sanitize_hermes_provider_keys` strips the UI
markers on write so they never reach YAML.

The `schema_migrated_v12` health warning copy reframes the situation:
entries are shown read-only in CC Switch rather than invisible.
2026-04-21 11:57:07 +08:00
Jason
e10a300c80 fix(hermes): prevent YAML pollution and drop of OAuth mcp auth
DeepLink Hermes import was emitting camelCase (baseUrl / apiKey /
apiMode) that the Hermes runtime does not recognise, poisoning
`custom_providers:` entries on activation. The MCP sync path was
also stripping `auth: oauth` on round-trip, silently downgrading
OAuth-type servers to unauthenticated calls.

The Hermes deeplink branch now emits snake_case via a dedicated
builder; `sanitize_hermes_provider_keys` runs on both `set_provider`
and `get_providers` so legacy DB records heal on next access.
`HERMES_EXTRA_FIELDS` preserves `auth`. The `api_mode` dropdown gains
`codex_responses` (Copilot / OpenCode), and the schema-migrated
warning copy no longer hard-codes "v12" (upstream `_config_version`
is now 19).
2026-04-21 11:57:07 +08:00
Jason
31fb998575 refactor(hermes): simplify schema handling + preserve unknown provider fields
Drops the v11→v12 providers-dict compat layer: CC Switch now only
reads/writes `custom_providers:`, leaving migrated `providers:` dict
entries to Hermes Web UI for reconciliation (Hermes' runtime already
merges both shapes via `get_compatible_custom_providers`). The
`schema_migrated_v12` health warning now points users there when a
dict-migrated config is detected.

Adds forward-compat merge to `set_provider`: when updating an existing
entry, on-disk fields the UI payload didn't submit (e.g. Hermes-only
`request_timeout_seconds`, `key_env`) are carried over. Without this,
editing one field via CC Switch would silently strip the rest.

Adds `set_memory_enabled` + `set_hermes_memory_enabled` Tauri command
for the upcoming memory-switch UI. Writes go through a merge-aware
section replacement so character budgets and external-provider fields
survive toggle operations.

Removes four dict-only helpers (`normalize_providers_dict_entry_for_read`,
`rename_alias_key`, `json_obj_non_empty_str`,
`resolve_provider_name_from_yaml_entry`) and the multi-section write
helper. Simplifies `get_providers` / `remove_provider` / health scan
back to list-only. Replaces nine obsolete dict-related tests with
`set_provider_preserves_unknown_fields_on_update` and
`set_memory_enabled_preserves_other_fields`.
2026-04-21 11:57:07 +08:00
Jason
acc6d795e4 feat(hermes): replace Prompts entry with Memory panel
Hermes has no slash-prompt concept (templates live as Skills), so the
Prompts tab for the Hermes app was always empty. Swap the toolbar Book
button for a Brain button that opens a new Memory panel editing
~/.hermes/memories/{MEMORY,USER}.md — Hermes' first-class memory store
which its Web UI exposes only as on/off toggles, never as an editor.

The panel shows each file in its own tab with a character-budget bar
read from config.yaml's nested memory.* section (memory_char_limit /
user_char_limit, default 2200 / 1375). Edits are written atomically;
Hermes picks them up on the next session start per MemoryStore.

Also extract useDarkMode to src/hooks/useDarkMode.ts — the codebase
already repeats the same MutationObserver pattern in 12+ places; this
PR introduces the shared hook and uses it once, leaving the migration
of the other copies to a follow-up.
2026-04-21 11:57:07 +08:00
Jason
088b47b08a refactor(hermes): delegate deep config to Hermes Web UI
Slim the Hermes surface in CC Switch to match its core positioning —
cross-client provider switching and shared MCP/prompts/skills — and
delegate deep configuration (model, agent, env, skills, cron, logs)
to the Hermes Web UI at http://127.0.0.1:9119.

- Drop AgentPanel/EnvPanel/ModelPanel and their mutation commands,
  hooks, types, and i18n keys across zh/en/ja.
- Add open_hermes_web_ui Tauri command that probes /api/status and
  launches the URL in the system browser. Hermes injects its own
  session token into the returned HTML, so CC Switch doesn't need
  to touch auth.
- Surface the launcher from the Hermes toolbar and the health banner
  via a shared useOpenHermesWebUI() hook; the offline error code is
  defined once per side and referenced across the contract.
- Keep read-only access to model.provider so ProviderList can still
  highlight the active supplier; apply_switch_defaults continues to
  write the top-level model section when switching providers.

Net diff: +152 / -1253.
2026-04-21 11:57:06 +08:00
Jason
5056978d80 fix(hermes): persist providers under custom_providers so api_mode/model survive
Writing to the v12+ `providers:` dict broke every anthropic_messages
provider. Hermes `runtime_provider.py::_get_named_custom_provider` has a
bug in its `providers:` branch: the returned entry drops `api_mode`,
`transport`, `models`, and singular `model:`, and
`_resolve_named_custom_runtime` then falls back to `chat_completions` —
so an Anthropic-format endpoint receives OpenAI-format requests and
returns 404.

Keep using the legacy `custom_providers:` list; its normalization path
(`_normalize_custom_provider_entry`) preserves every field. In addition,
write a singular `model:` alongside the plural `models:` dict so the
Hermes runtime and `/model` picker see the default model id.

Also keep the `apply_switch_defaults` fix from the prior attempt:
`model.provider` is always updated, and `model.default` is only
overwritten when the new provider declares at least one model — so
switching to an incomplete provider no longer silently no-ops.
2026-04-21 11:57:06 +08:00
Jason
f935bac633 feat(hermes): bind per-provider models to top-level model: on switch
Hermes custom_providers entries now carry an ordered models array
(id / context_length / max_tokens) plus suggestedDefaults. The backend
serializes the array to the YAML dict shape Hermes expects on write and
inverts it on read, preserving insertion order via the preserve_order
feature on serde_json.

When a user switches providers, switch_normal calls apply_switch_defaults
so the top-level model.default / model.provider follow the selected
provider's first model. Previously switching a Hermes provider only
shuffled custom_providers[] and left Hermes pointing at whatever
model.provider was set before.

Seven existing Hermes presets now ship with a curated models list so
switching lands on a working default without a detour through the
Model panel.
2026-04-21 11:57:06 +08:00
Jason
63aa310576 feat(copilot): strip thinking blocks before forwarding to save premium quota
Copilot routes through OpenAI-compatible endpoints that reject Anthropic's
thinking and redacted_thinking blocks. Previously the request would fail
upstream, burning one premium interaction, and only then trigger
thinking_rectifier to retry. This adds a proactive strip_thinking_blocks
pass in the Copilot optimization pipeline (step 3.5, after tool_result
merging). Signature fields and top-level thinking are left alone — those
are the reactive rectifier's job on the error path.

Also fixes a default-value inconsistency where CopilotOptimizerConfig's
Default impl used "gpt-4o-mini" while the serde default function returned
"gpt-5-mini" (aligned to gpt-5-mini, matching the reference implementation).

Aligned with yuegongzi/copilot-api's /v1/messages handler behavior.
2026-04-21 11:57:06 +08:00
Jason
1684cb3233 feat(presets): migrate all aggregator and Bedrock presets to Claude Opus 4.7
- OpenClaw: replace opus-4-6 with opus-4-7 across 17 aggregator presets
  (id, name, primary, modelCatalog); AWS Bedrock entry rewritten to new
  SKU anthropic.claude-opus-4-7 (drops -v1 and dated suffix per official
  4.7 model card) and pricing corrected to $5/$25/$0.50/$6.25 during the
  SKU swap, aligning with schema.rs source of truth
- OpenCode: same replacement for 13 aggregators plus
  OPENCODE_PRESET_MODEL_VARIANTS entries for @ai-sdk/amazon-bedrock and
  @ai-sdk/anthropic, plus AWS Bedrock provider models map
- OpenRouter / TheRouter / GitHub Copilot in claudeProviderPresets use
  dot-style id; update to anthropic/claude-opus-4.7 (missed by 509d2250)
- omo: switch agent/category recommended to opus-4-7; replace key in
  OMO_BACKGROUND_TASK_PLACEHOLDER priority map
- hermes_config.rs: update doc comments and test fixtures to opus-4-7;
  Hermes ModelPanel placeholder and i18n defaultHint examples follow
- i18n unspecifiedHigh category description bumped to 'Claude Opus 4.7
  max variant' to match omo recommended
- Test fixtures updated: therouter preset assertion and opencode Bedrock
  variant lookup now check for opus-4-7
- Sonnet 4.6 / Haiku 4.5 untouched - no official 4.7 release for them
2026-04-21 11:57:06 +08:00
Jason
83c3c3b494 feat(pricing): add Claude Opus 4.7 with adaptive thinking and Bedrock SKU
- Seed claude-opus-4-7 pricing (same tier as 4.6: $5 / $25 / $0.50 /
  $6.25 per million tokens). Relies on incremental INSERT OR IGNORE
  seeding; no SCHEMA_VERSION bump needed.
- Whitelist opus-4-7 in thinking optimizer so it uses adaptive
  thinking + max effort + 1M context beta, matching 4.6 behavior.
- Bump default OPUS model in PIPELLM and AWS Bedrock (AKSK / API Key)
  presets to 4.7. Bedrock SKU drops the -v1 suffix per the official
  4.7 model card (anthropic.claude-opus-4-7 and
  global.anthropic.claude-opus-4-7).
2026-04-21 11:57:06 +08:00
Jason
d03e6f9951 chore(lint): pin Rust toolchain to 1.95 and adopt clippy 1.95 suggestions
- Add rust-toolchain.toml to align local and CI Rust versions, eliminating
  clippy roulette caused by `dtolnay/rust-toolchain@stable` drift.
- Fix 9 clippy 1.95 findings introduced by Hermes Phase 4-8 modules:
  * 4x unnecessary_sort_by  -> sort_by_key (with Reverse for desc)
  * 3x collapsible_match    -> match guards
  * 1x while_let_loop       -> while let
  * 1x useless_conversion   -> drop redundant .into_iter()
2026-04-21 11:57:06 +08:00
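The `unnecessary_sort_by -> sort_by_key (with Reverse for desc)` item above is worth showing concretely. This is a generic sketch with an illustrative struct, not one of the four actual call sites:

```rust
use std::cmp::Reverse;

#[derive(Debug, PartialEq)]
struct Session {
    last_active_at: u64,
    title: String,
}

// Clippy's unnecessary_sort_by: comparing two extracted keys in a closure
// is better written as sort_by_key; std::cmp::Reverse flips the order for
// a descending sort.
fn sort_sessions_desc(sessions: &mut [Session]) {
    // Before: sessions.sort_by(|a, b| b.last_active_at.cmp(&a.last_active_at));
    sessions.sort_by_key(|s| Reverse(s.last_active_at));
}
```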
Jason
0ca36b9d51 fix: address Hermes review findings (5 medium issues)
- Add missing Hermes MCP import on first launch (lib.rs)
- Add Hermes branch in ProviderForm defaultValues fallback
- Include Hermes in session manager subtitle (zh/en/ja)
- Rename check_openclaw_stream to check_additive_app_stream
- Cache parsed HERMES_DEFAULT_CONFIG to avoid repeated JSON.parse
2026-04-21 11:57:06 +08:00
Jason
e8953c286f feat: implement Hermes session manager with SQLite + JSONL support (Phase 6)
- Add hermes.rs session provider with dual-source scanning:
  SQLite (state.db) as primary, JSONL transcripts as fallback
- Dynamic schema discovery via PRAGMA table_info for SQLite resilience
- Use read_head_tail_lines for efficient JSONL metadata extraction
  (head 30 lines for metadata, tail 10 for last_active_at)
- Support both flat and nested JSONL message formats
- Add SQLite session loading and transactional deletion
- Register hermes in parallel session scan (thread::scope)
- Add "hermes" to frontend ProviderFilter type
- 7 unit tests covering JSONL parsing, SQLite source parsing, deletion
2026-04-21 11:57:06 +08:00
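The head/tail read described above works because JSONL session metadata sits near the top of the file and `last_active_at` near the bottom, so the full transcript never needs to be held in memory. A sketch of the idea, assuming the real helper streams from disk (this version takes any line iterator and keeps the tail in a bounded `VecDeque`):

```rust
use std::collections::VecDeque;

// Returns the first `head` lines and last `tail` lines of a line stream in
// a single pass, without buffering the whole input.
fn read_head_tail_lines<'a, I>(lines: I, head: usize, tail: usize) -> (Vec<String>, Vec<String>)
where
    I: Iterator<Item = &'a str>,
{
    let mut head_buf = Vec::with_capacity(head);
    let mut tail_buf: VecDeque<String> = VecDeque::with_capacity(tail);
    for line in lines {
        if head_buf.len() < head {
            head_buf.push(line.to_string());
        }
        if tail_buf.len() == tail {
            tail_buf.pop_front(); // keep only the most recent `tail` lines
        }
        tail_buf.push_back(line.to_string());
    }
    (head_buf, tail_buf.into_iter().collect())
}
```

With the commit's numbers, `head = 30` covers session metadata and `tail = 10` is enough to find the newest `last_active_at`.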
Jason
240969d8c7 feat: add Hermes UI components, presets, and config panels (Phase 8)
- Add 7 provider presets (OpenRouter, Anthropic, OpenAI, Google, DeepSeek, Together, Nous)
- Create HermesFormFields + useHermesFormState for provider form integration
- Create Model/Agent/Env config panels with save/load functionality
- Create HermesHealthBanner for config warnings
- Add hermes icon (violet winged H) to icon system
- Integrate into App.tsx: 3 new view types (hermesModel/hermesAgent/hermesEnv),
  sidebar buttons (Brain/Bot/KeyRound), health banner, session support
- Integrate into ProviderForm: presets, form state, key validation, rendering
- Integrate into AddProviderDialog: universal tab exclusion, providerKey, base_url extraction
- Add i18n keys for all Hermes UI (zh/en/ja)
2026-04-21 11:57:06 +08:00
Jason
576ff53a75 feat: implement Hermes MCP sync module (Phase 4)
Add mcp/hermes.rs with bidirectional MCP format conversion:
- convert_to_hermes_format: strip type field, infer from command/url
- convert_from_hermes_format: infer type, strip Hermes-specific fields
- Merge-on-write: existing Hermes fields (tools, sampling, timeout,
  roots, enabled) preserved when user has customized them
- update_mcp_servers_yaml: closure-based read-modify-write under write
  lock to prevent TOCTOU races in concurrent sync operations
- 9 unit tests for format conversion and merge logic

Wire up all MCP service dispatch:
- Replace Hermes TODO stubs with real sync/remove calls
- Remove Hermes from sync_all_enabled skip list
- Enable deep link hermes MCP flag (apps.hermes = true)
- Add Hermes import to import_mcp_from_apps command
2026-04-21 11:57:06 +08:00
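The strip-and-infer conversion described above hinges on one rule: Hermes config carries no explicit `type` field, so the server kind must be inferred from which of `command` / `url` is present. A sketch of that inference — the enum and function names are illustrative, not the module's actual identifiers:

```rust
#[derive(Debug, PartialEq)]
enum McpTransport {
    Stdio, // local process launched via `command`
    Http,  // remote server reached via `url`
}

// Infer the transport for an entry read back from Hermes YAML, where the
// explicit `type` field was stripped on write. `command` wins when both
// are present, since a runnable local server is the stronger signal.
fn infer_mcp_transport(command: Option<&str>, url: Option<&str>) -> Option<McpTransport> {
    match (command, url) {
        (Some(_), _) => Some(McpTransport::Stdio),
        (None, Some(_)) => Some(McpTransport::Http),
        (None, None) => None, // malformed entry; caller should surface a warning
    }
}
```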
Jason
6d0e9f4c74 feat: implement Hermes config module and commands (Phase 3)
Add hermes_config.rs (~1190 lines) with YAML section-level replacement
that preserves comments and formatting in unmanaged sections:
- Type definitions: HermesModelConfig, HermesAgentConfig, HermesEnvConfig
- YAML section finder (find_yaml_section_range) with column-0 key detection
- Provider CRUD on custom_providers array (indexed by name field)
- Model/Agent config get/set via yaml<->json conversion
- .env dotenv read/write preserving comments and line ordering
- Health check, backup with rotation, write lock (OnceLock<Mutex>)
- MCP section access stubs for Phase 4
- 19 unit tests

Add commands/hermes.rs with 10 Tauri commands registered in lib.rs.
Replace all Hermes TODO stubs in services/provider/live.rs with real
implementations (import, remove, write-to-live, read-live-settings).
2026-04-21 11:57:06 +08:00
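The section-level replacement described above depends on finding the line span owned by one top-level YAML key so everything outside it — comments and formatting included — is left byte-for-byte intact. An illustrative reimplementation of the column-0 detection idea (not the actual `find_yaml_section_range`):

```rust
// Returns the half-open line range [start, end) of a top-level `key:`
// section: the column-0 header line plus every following line that is
// indented, blank, or a comment. The next column-0 key (or EOF) ends it.
fn find_yaml_section_range(yaml: &str, key: &str) -> Option<(usize, usize)> {
    let lines: Vec<&str> = yaml.lines().collect();
    let header = format!("{}:", key);
    // Column-0 match only: an indented `  model:` belongs to another section.
    let start = lines.iter().position(|l| l.starts_with(header.as_str()))?;
    let end = lines[start + 1..]
        .iter()
        .position(|l| !l.is_empty() && !l.starts_with(' ') && !l.starts_with('#'))
        .map(|off| start + 1 + off)
        .unwrap_or(lines.len());
    Some((start, end))
}
```

Replacing only `lines[start..end]` and rejoining is what lets user comments in unmanaged sections survive every write.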
Jason
a2e9e1938b feat: add database migration v9→v10 for Hermes support (Phase 2)
- Bump SCHEMA_VERSION from 9 to 10
- Add enabled_hermes column to mcp_servers and skills tables
- Add migrate_v9_to_v10 with table_exists guard for skills (may not
  exist in databases migrated from very old versions)
- Update dao/mcp.rs to fully read/write enabled_hermes in all queries
- Update dao/skills.rs: don't SELECT enabled_hermes (Hermes doesn't
  support Skills yet), keep column indices clean
2026-04-21 11:57:06 +08:00
Jason
81af0a57f9 feat: add Hermes Agent as 6th supported app type (Phase 1)
Register AppType::Hermes across the entire Rust backend:
- Add Hermes variant to AppType enum with additive mode and MCP support
- Add hermes field to McpApps, SkillApps, CommonConfigSnippets, and all
  per-app structs (McpRoot, PromptRoot, VisibleApps, AppSettings)
- Create minimal hermes_config.rs with get_hermes_dir() respecting
  settings override, matching the pattern of other app config modules
- Update all match arms in commands, services, deeplink, proxy, mcp,
  session_manager, and test files
- Extract shared build_additive_app_settings() to eliminate duplication
  between OpenClaw and Hermes deep link handling
- Combine identical OpenClaw/Hermes proxy match arms into unified arms
2026-04-21 11:57:06 +08:00
Jason
e4c34b34e7 fix: remove ANTHROPIC_REASONING_MODEL to decouple thinking from model selection (#2081)
ANTHROPIC_REASONING_MODEL was a non-official env var that forced all
requests with thinking params to use a single "reasoning model",
overriding the user's /model selection. Since new Claude Code versions
send adaptive thinking by default, this caused /model to silently fail.

- Remove reasoning_model field and has_thinking_enabled() from model_mapper
- Simplify map_model() to pure type-based matching (haiku/sonnet/opus)
- Remove reasoning model UI field from provider form
- Retain ANTHROPIC_REASONING_MODEL in ENV_EXCLUDES and override-key
  cleanup lists so legacy configs don't leak into common config
2026-04-21 11:57:06 +08:00
Li Yang
61bfc29d82 fix(tray): use an app-specific tray id (#1978)
Co-authored-by: liyang <liyang25@pku.edu.cn>
2026-04-21 10:42:59 +08:00
Suda202
2c9252dec5 Fix Ghostty session restore launch path (#1976)
* fix: launch Ghostty via shell command

Use Ghostty's shell execution path instead of injecting raw terminal input so Claude resume commands run reliably when opening a session terminal.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(ghostty): pass cwd via --working-directory instead of shell string

Use Ghostty's native --working-directory flag to set the working
directory, matching the pattern used by Alacritty. This avoids shell
expansion of special characters (e.g. $VAR, spaces) in project paths.

The command is now passed directly to -c without a cd prefix.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-20 12:42:00 +08:00
hotelbe
87635e7fc6 feat(copilot): add GitHub Enterprise Server support (#2175)
* feat(copilot): add GitHub Enterprise Server support

* fix(copilot): address GHES PR review findings (P1 + 2×P2)

- P1: Use composite account ID (domain:user_id) for GHES to prevent
  cross-instance ID collisions; github.com keeps plain numeric ID for
  backward compatibility
- P2-a: Use get_api_endpoint() for model list URL with automatic
  fallback to static URL when dynamic endpoint resolution fails
- P2-b: Add normalize_github_domain() as backend SSOT for domain
  normalization (lowercase, strip protocol/path/query, reject userinfo)

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-19 20:29:46 +08:00
YaoguoHH
9871d3d1eb fix(skills): sync imported skills to app directories after import (#2101)
`import_from_apps()` saves skills to the database but does not create
symlinks/copies in the target app directories (e.g. `~/.claude/skills/`).
This causes skills to appear as "installed" in the UI while the actual
files are missing from the app directories.

Add `sync_to_app_dir()` calls after `db.save_skill()` in the import
loop, matching the pattern used by `install()` and `toggle_app()`.
2026-04-19 15:25:25 +08:00
Coconut-Fish
1126c7459d Style/add provider.notes (#2138)
* style(FailoverQueueManager): display provider notes

* style(FailoverQueueItem): add a provider notes field to support displaying notes

* style(FailoverQueueManager): display provider notes

* style(FailoverQueueItem): add a provider notes field to support displaying notes

* style(FailoverQueueManager): update the display style of provider notes

* style(FailoverQueueItem): add conditional serialization to optimize the provider notes field
2026-04-18 17:48:58 +08:00
Dex Miller
03a0f9661b feat(proxy): Gemini Native API proxy integration (#1918)
* refactor(proxy): extract take_sse_block helper with CRLF delimiter support

Replace inline `buffer.find("\n\n")` SSE splitting logic across streaming,
streaming_responses, response_handler, and response_processor with a shared
`take_sse_block` function that handles both `\n\n` and `\r\n\r\n` delimiters.
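A sketch of what such a helper looks like — illustrative, not the exact code from this commit: pop one complete SSE event off the front of a growing buffer, accepting whichever of the two delimiters appears first.

```rust
// Removes and returns the first complete SSE event block from `buffer`,
// delimited by either "\n\n" or "\r\n\r\n"; returns None when no complete
// event is buffered yet.
fn take_sse_block(buffer: &mut String) -> Option<String> {
    let lf = buffer.find("\n\n").map(|i| (i, 2));
    let crlf = buffer.find("\r\n\r\n").map(|i| (i, 4));
    let (idx, delim_len) = match (lf, crlf) {
        // Both present: take whichever delimiter occurs earlier.
        (Some(a), Some(b)) => if a.0 <= b.0 { a } else { b },
        (Some(a), None) => a,
        (None, Some(b)) => b,
        (None, None) => return None,
    };
    let block = buffer[..idx].to_string();
    buffer.drain(..idx + delim_len); // drop the event and its delimiter
    Some(block)
}
```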

* feat(proxy): add Gemini Native URL builder and full-URL resolver

Introduce gemini_url module that normalizes legacy Gemini/OpenAI-compatible
base URLs into canonical models/*:generateContent endpoints. Supports both
structured Gemini URLs (auto-normalized) and opaque relay URLs (pass-through
with query params only).

* feat(proxy): add Gemini Native schema, shadow store, transform, and streaming

- gemini_schema: Gemini generateContent request/response type definitions
- gemini_shadow: session-scoped shadow store for thinking signature and
  tool-call state replay across streaming chunks
- transform_gemini: bidirectional Anthropic Messages ↔ Gemini Native
  request/response conversion with thinking block and tool-use support
- streaming_gemini: Gemini SSE → Anthropic SSE streaming adapter with
  incremental thinking/text/tool_use delta emission

* feat(proxy): wire Gemini Native format into proxy core and Claude adapter

Integrate gemini_native api_format throughout the proxy pipeline:
- ClaudeAdapter: detect Gemini provider type, Google/GoogleOAuth auth
  strategies, and suppress Anthropic-specific headers for Gemini targets
- Forwarder: Gemini URL resolution, shadow store threading, endpoint
  rewriting to models/*:generateContent with stream/non-stream variants
- Handlers: route Gemini streaming through streaming_gemini adapter and
  non-streaming through transform_gemini converter
- Server/State: add GeminiShadowStore to shared ProxyState
- StreamCheck: support gemini_native health check with proper auth headers

* feat(ui): add Gemini Native provider preset and api format option

- Add gemini_native to ClaudeApiFormat type and ProviderMeta.apiFormat
- Add "Gemini Native" provider preset with default Google AI endpoints
- Show Gemini-specific endpoint hints and full-URL mode guidance
- Add gemini_native option to API format selector in ClaudeFormFields
- Add i18n strings for zh/en/ja

* feat(proxy): add Gemini Native tool argument rectification

* feat(proxy): update Gemini streaming and transformation logic

* fix(proxy): align shadow turns to tail on client history truncation

* fix: revert unrelated cache_key change in claude proxy transform

Restore .unwrap_or(&provider.id) fallback for cache_key to match main
branch behavior. Only gemini_native related changes should be in this branch.

* Prevent Gemini review regressions in streaming and tool rectification

PR #1918 review feedback exposed two correctness issues in the Gemini Native adapter path. Gemini SSE buffering was still using lossy UTF-8 decoding, which could corrupt split multibyte payloads and drop streamed output. Tool arg rectification also removed top-level parameters eagerly, which broke tools that legitimately define a parameters field.

This change moves Gemini SSE buffering onto the existing append_utf8_safe path and makes parameters flattening conditional on the schema actually expecting nested extraction. The old Skill rectification path stays intact, and new regression tests cover both the preserved parameters case and UTF-8-split JSON payloads.
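The accumulator idea can be sketched as follows — an illustrative stand-in for `append_utf8_safe`, assuming the stream is well-formed UTF-8 that merely gets split at arbitrary byte boundaries: decode the valid prefix, carry the incomplete tail into the next chunk, and never emit U+FFFD replacement characters the way `from_utf8_lossy` would.

```rust
// Appends `chunk` to `out`, treating `carry` as the undecoded tail from
// previous chunks. A multibyte character split across chunk boundaries is
// held back until its remaining bytes arrive, instead of being corrupted.
fn append_utf8_safe(out: &mut String, carry: &mut Vec<u8>, chunk: &[u8]) {
    carry.extend_from_slice(chunk);
    match std::str::from_utf8(carry) {
        Ok(s) => {
            out.push_str(s);
            carry.clear();
        }
        Err(e) => {
            // Decode the valid prefix; keep the incomplete tail for later.
            let valid = e.valid_up_to();
            out.push_str(std::str::from_utf8(&carry[..valid]).unwrap());
            carry.drain(..valid);
        }
    }
}
```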

Constraint: Existing PR #1918 review feedback must be fixed without staging unrelated local docs and artifact files
Rejected: Keep String::from_utf8_lossy in Gemini SSE buffering | corrupts split multibyte payloads and can drop JSON chunks
Rejected: Always preserve the parameters wrapper | regresses the existing nested-parameters rectification path for Skill-style tools
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Keep Gemini SSE buffering on the UTF-8-safe accumulator path and only unwrap parameters when the target schema does not declare it as a legitimate field
Tested: cargo fmt --manifest-path src-tauri/Cargo.toml --all; cargo test --manifest-path src-tauri/Cargo.toml preserves_utf8_boundaries_when_json_payload_spans_chunks; cargo test --manifest-path src-tauri/Cargo.toml gemini_to_anthropic_rectifies_tool_args_from_schema_hints; cargo test --manifest-path src-tauri/Cargo.toml rectifies_streamed_skill_args_from_nested_parameters; cargo test --manifest-path src-tauri/Cargo.toml gemini_to_anthropic_preserves_legitimate_parameters_arg
Not-tested: Full src-tauri test suite; live end-to-end Gemini relay traffic against upstream services

* Keep Gemini tool replay stable across Claude request boundaries

Claude Code follow-up requests were still falling back to locally reconstructed functionCall parts, which dropped Gemini thought signatures and triggered INVALID_ARGUMENT errors from the official Gemini API. The replay path needed to survive real Claude request boundaries, not just idealized in-process test flows.

This change makes Claude requests reuse X-Claude-Code-Session-Id as the shadow session key, records streamed Gemini tool turns before tool_use events are fully drained, and matches assistant tool_use turns to shadow state by tool_use id and normalized tool name before positional fallback. Together these fixes keep thoughtSignature-bearing Gemini tool calls available for the next request in the loop.

Constraint: Claude Code sends a stable X-Claude-Code-Session-Id header while metadata.session_id may be absent on follow-up requests
Rejected: Rely on metadata-only Claude session extraction | generated fresh session ids and broke cross-request shadow replay
Rejected: Record Gemini shadow only after streaming completes | loses the race when the client sends the next request immediately after tool_use
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Preserve Gemini shadow continuity across requests by keying Claude sessions from the header first and persisting tool-call shadow before yielding tool_use events downstream
Tested: cargo fmt --manifest-path src-tauri/Cargo.toml --all; cargo test --manifest-path src-tauri/Cargo.toml test_extract_session_from_claude_header; cargo test --manifest-path src-tauri/Cargo.toml test_extract_session_from_claude_header_precedes_metadata; cargo test --manifest-path src-tauri/Cargo.toml stores_tool_shadow_before_tool_use_events_are_fully_drained; cargo test --manifest-path src-tauri/Cargo.toml shadow_replay_matches_tool_use_turn_by_id_when_position_drifts; cargo test --manifest-path src-tauri/Cargo.toml shadow_replay_aligns_to_latest_turns_after_client_truncation
Not-tested: Full src-tauri test suite without test filters; live end-to-end Gemini relay after this exact commit hash

* style: apply cargo fmt to pass Backend Checks CI

Wrap prompt_cache_key chained call across lines per rustfmt default
formatting. Pure formatting change, no behavior difference.

* fix(proxy/gemini): synthesize unique ids for no-id tool calls + enforce object params schema

P1 — Parallel tool calls without Gemini-assigned ids no longer collapse.
Gemini 2.x native parallel `functionCall` entries may omit the `id` field.
The previous `merge_tool_call_snapshots` fell back to matching by `name`,
which silently merged two parallel calls to the same function into one
entry — dropping the first call's args. The non-streaming path and shadow
store also collided on empty-string ids: multiple `tool_use` blocks
shared the same id, and `tool_name_by_id.get("")` could only return one
mapping, causing later `tool_result` round-trips to fail with
`Unable to resolve Gemini functionResponse.name` or bind to the wrong tool.

Fix: introduce `synthesize_tool_call_id()` producing `gemini_synth_<uuid>`.
Both streaming and non-streaming response paths now guarantee every
Anthropic-visible tool_use carries a unique id. `merge_tool_call_snapshots`
matches by id first, falling back to the `parts` array position (for the
cumulative-streaming case) while preserving the synthesized id across
chunks. `convert_message_content_to_parts` detects the synthetic prefix
and strips the id from outbound `functionCall`/`functionResponse` so the
internal identifier never leaks upstream. `shadow_parts` performs the
same strip when replaying a recorded assistant turn.
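
As a std-only sketch of the invariant (a counter stands in for the real UUID, and names are simplified from the actual cc-switch helpers):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

/// Prefix marking ids that cc-switch invented for id-less functionCalls.
const SYNTH_PREFIX: &str = "gemini_synth_";

static COUNTER: AtomicU64 = AtomicU64::new(0);

/// Stand-in for the real `synthesize_tool_call_id()` (which uses a UUID);
/// a process-local counter keeps this sketch dependency-free.
fn synthesize_tool_call_id() -> String {
    let n = COUNTER.fetch_add(1, Ordering::Relaxed);
    format!("{SYNTH_PREFIX}{n:08x}")
}

/// Outbound strip: a synthesized id must never reach Gemini upstream.
fn outbound_function_call_id(id: &str) -> Option<&str> {
    if id.starts_with(SYNTH_PREFIX) {
        None
    } else {
        Some(id)
    }
}
```

Every Anthropic-visible tool_use gets a unique id from the first helper; the second runs on outbound `functionCall`/`functionResponse` ids so the internal identifier stays internal.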

P2 — Vertex AI rejects empty `parameters` schemas. When an Anthropic tool
arrives with missing or empty `input_schema`, the proxy used to emit
`"parameters": {}` (no `type`), which fails Vertex AI validation with
`functionDeclaration parameters schema should be of type OBJECT`.
Contrary to the automated-review suggestion, the fix is not to omit
`parameters` (that too is rejected) but to normalize to the canonical
empty-object form `{type: "object", properties: {}}`.
Refs: google-gemini/generative-ai-python#423, BerriAI/litellm#5055.

Fix: new `ensure_object_schema` helper in `gemini_schema` promotes
missing `type` to `"object"` and adds empty `properties` when absent,
while leaving atomic (non-object) schemas untouched.

Tests: seven new regressions covering parallel no-id calls, cumulative
chunk id reuse, synthetic-id round-trip both directions, shadow replay
id stripping, and the three Vertex-AI schema shapes.

The two existing wrapper functions (`gemini_to_anthropic` and
`gemini_to_anthropic_with_shadow`) gain `#[allow(dead_code)]` to clear
a pre-existing clippy -D warnings failure — they are part of the public
transform API surface and intentionally kept for future callers.

Addresses Codex review P1/P2 on #1918.

* fix(proxy/gemini): narrow URL normalization + guard empty OAuth access_token

P2a — Preserve opaque relay URLs that contain `/v1/models/` prefixes.

`should_normalize_gemini_full_url` previously flagged any full URL whose
path merely contained `/v1beta/models/` or `/v1/models/` as a structured
Gemini endpoint, forcing rewrite to `.../v1beta/models/{model}:method`.
This silently dropped legitimate relay route segments (e.g.
`https://relay.example/v1/models/invoke` → `.../v1beta/models/...:generateContent`,
losing `/invoke`) and sent traffic to the wrong upstream path.

Replace the bare `contains(...)` checks with
`matches_structured_gemini_models_path`, which requires the
`/models/` segment to be followed by a canonical Gemini method call
(`*:generateContent` or `*:streamGenerateContent`). The
`matches_bare_gemini_models_path` helper is generalized (and renamed) to
handle both `/v1beta/models/` and `/v1/models/` alongside the original
bare `/models/` shape.
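
A minimal sketch of the structured-path predicate (the real helper also has to tolerate query strings; this version assumes a bare path):

```rust
/// A path is "structured Gemini" only when a `/models/<id>:<method>`
/// segment uses a canonical Gemini method, not merely because it
/// contains `/v1/models/` somewhere.
fn matches_structured_gemini_models_path(path: &str) -> bool {
    let Some(rest) = path.split("/models/").nth(1) else {
        return false;
    };
    // The model segment ends at the next '/' (an opaque relay route
    // like `/v1/models/invoke`) or carries the `:method` suffix.
    let segment = rest.split('/').next().unwrap_or("");
    matches!(
        segment.split_once(':').map(|(_, method)| method),
        Some("generateContent") | Some("streamGenerateContent")
    )
}
```

Under this predicate `/v1/models/invoke` has no `:method` suffix and stays opaque, while `/v1/models/gemini-2.5-pro:generateContent` still normalizes.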

P2b — Reject empty Gemini OAuth access_tokens before they reach the
bearer header.

`GeminiAdapter::parse_oauth_credentials` accepts refresh-token-only JSON
(and surfaces `{"access_token": "", ...}` for expired credentials) with
`access_token` defaulting to `""`. The Claude adapter's GeminiCli branch
then called `AuthInfo::with_access_token(key, creds.access_token)`
unconditionally, so the bearer-header builder at
`AuthStrategy::GoogleOAuth` resolved to `Authorization: Bearer ` — a
deterministic 401 from upstream.

CC Switch does not currently exchange the refresh_token for a fresh
access_token (`OAuthCredentials::needs_refresh` / `can_refresh` are
annotated `#[allow(dead_code)]`). Until that exists, only attach
`access_token` when it is non-empty; fall back to plain GoogleOAuth
strategy with the raw key, and log a warning pointing users at
`~/.gemini/oauth_creds.json` so the failure mode is observable.

Tests:
- gemini_url.rs: three new regressions — opaque `/v1/models/invoke`,
  opaque `/v1beta/models/route`, and the positive counter-case where a
  structured `/v1/models/...:generateContent` path still normalizes.
- claude.rs: three new `test_extract_auth_gemini_cli_*` tests covering
  refresh-only JSON, empty-string access_token JSON, and the valid-JSON
  pass-through.

All 839 lib tests pass; cargo fmt + clippy -D warnings clean.

Addresses Codex review P2 findings on #1918.

* fix(proxy/gemini): treat empty-string functionCall id as missing in streaming path

Follow-up to the earlier P1 fix: some Gemini relays serialize an absent
functionCall id as `"id": ""` instead of omitting the field. The
non-streaming `extract_tool_call_meta` already filters these via
`.filter(|s| !s.is_empty())`, but the streaming counterpart
`extract_tool_calls` passed the empty string straight through
`function_call.get("id").and_then(|v| v.as_str())` into
`GeminiToolCallMeta::new`, producing a `Some("")` id.

Downstream, `merge_tool_call_snapshots` would then match two parallel
no-id calls against each other on their shared empty-string id,
collapsing them into a single snapshot (silent data loss for the first
call) and emitting an Anthropic `tool_use.id: ""` that breaks tool_result
correlation on the Claude Code client.

Fix:
- `extract_tool_calls`: apply the same `filter(|s| !s.is_empty())` guard
  used in the non-streaming path so empty strings become `None` before
  reaching the shadow meta.
- `merge_tool_call_snapshots`: defensively collapse any incoming
  `Some("")` to `None` up front — keeps the "missing vs present" invariant
  local to the merge step for future callers that might build
  `GeminiToolCallMeta` by hand.
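
Sketched in isolation (the struct here is a pared-down stand-in for the real `GeminiToolCallMeta`), the invariant both fixes enforce is:

```rust
/// Pared-down stand-in for the real streaming meta type.
struct GeminiToolCallMeta {
    id: Option<String>,
    name: String,
}

impl GeminiToolCallMeta {
    /// Same guard as the non-streaming path: `Some("")` becomes `None`
    /// before the id can participate in snapshot merging.
    fn new(id: Option<&str>, name: &str) -> Self {
        Self {
            id: id.filter(|s| !s.is_empty()).map(str::to_owned),
            name: name.to_owned(),
        }
    }
}
```

With the empty string normalized to `None`, two parallel no-id calls can no longer match each other on a shared `""` id.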

Tests (2 new, both in streaming_gemini):
- `parallel_empty_string_id_calls_are_treated_as_missing_and_preserved`
  covers two parallel calls with explicit `"id": ""` — asserts both
  surface, no empty tool_use id leaks, and each gets a unique
  `gemini_synth_` id.
- `single_empty_string_id_tool_call_gets_synthesized_id` covers the
  non-parallel degraded-relay case.

All 841 lib tests pass; cargo fmt + clippy -D warnings clean.

Addresses Codex follow-up P1 on #1918.

* fix(proxy/gemini): gate generic REST path suffixes behind Google host whitelist

`should_normalize_gemini_full_url` previously treated any full URL whose
path ends with `/v1`, `/v1/models`, `/models`, `/v1/openai`, or `/openai`
as a structured Gemini endpoint and rewrote it to
`/v1beta/models/{model}:generateContent`. These are ubiquitous REST
conventions — opaque relays such as `https://relay.example/custom/v1`
legitimately use them for fixed endpoints — so the rewrite silently
routed traffic to the wrong upstream path.

Split the predicate into two layers:

- **Unconditional**: `matches_structured_gemini_models_path` (i.e. a
  `/models/...:generateContent` method call anywhere in the path), the
  Google-specific `/v1beta*` family, and the deep OpenAI-compat paths
  (`/v1beta/openai/chat/completions`, `/openai/chat/completions`, and
  their `responses` siblings). These remain host-agnostic because the
  path grammar itself is Gemini-specific.
- **Google-host gated**: `/v1`, `/v1/models`, `/models`, `/v1/openai`,
  `/openai`. Only normalized when the host is one of
  `generativelanguage.googleapis.com`, `aiplatform.googleapis.com`, or a
  real `*-aiplatform.googleapis.com` Vertex regional endpoint. The match
  is exact/suffix (not `contains`), so lookalike hosts like
  `aiplatform.example.com` are correctly treated as opaque relays.
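
The host gate reduces to exact/suffix matches; a sketch (host assumed already lower-cased and port-free):

```rust
/// Google-host gate for the generic REST suffixes. Exact match for the
/// two fixed endpoints, suffix match for real Vertex regional hosts
/// (`<region>-aiplatform.googleapis.com`). `contains` is deliberately
/// avoided so lookalike hosts stay opaque relays.
fn is_google_gemini_host(host: &str) -> bool {
    host == "generativelanguage.googleapis.com"
        || host == "aiplatform.googleapis.com"
        || host.ends_with("-aiplatform.googleapis.com")
}
```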

Tests (8 new in `gemini_url::tests`):
- Four opaque-relay cases: `/custom/v1`, `/custom/models`,
  `/custom/v1/models`, `/custom/openai` — all preserved as-is.
- Three Google-host counter-cases: `/v1`, `/models`, and
  `us-central1-aiplatform.googleapis.com/v1` still normalize.
- One lookalike safety case: `aiplatform.example.com/v1` is NOT
  treated as Google.

All 849 lib tests pass; cargo fmt + clippy -D warnings clean.

Addresses Codex review P2 on #1918.

* fix(proxy/gemini): align shadow id with client-visible id in non-streaming path

When Gemini returns a `functionCall` without an id (common in 2.x
parallel calls), `gemini_to_anthropic_with_shadow_and_hints` previously
generated TWO independent synthesized UUIDs:

  1. Line 186-197 — synthesized id `A` used for the Anthropic-visible
     `content[tool_use].id` returned to the client.
  2. Line 850-881 — `extract_tool_call_meta` independently synthesized
     id `B ≠ A`, which populated `shadow_turn.tool_calls[i].id`.

`shadow_content` (line 225-228, cloned from `rectified_parts`) retained
the original missing/empty id. Result: the client sees id `A`, the
shadow store holds id `B`.

On the next turn, `convert_messages_to_contents` builds
`tool_name_by_id` from `build_tool_name_map_from_shadow_turns`, which
uses `tool_calls[i].id` — so the map contains `B → name` but not
`A → name`. When the client sends back `tool_result(tool_use_id=A)`,
resolution fails with:

  Unable to resolve Gemini functionResponse.name for tool_use_id `A`

This affects both truncated histories (client sends only the
tool_result) and full histories (shadow-replay branch at line 342-354
skips `convert_message_content_to_parts`, so the assistant tool_use
block never registers id `A` itself).

Fix: make `rectified_parts` the single source of truth. After
`rectify_tool_call_parts`, run a pre-pass that writes
`synthesize_tool_call_id()` back into any `functionCall` that lacks a
non-empty id. All three readers — the content builder (186-197), the
shadow_content clone (225-228), and `extract_tool_call_meta` — then
observe the same id. `shadow_parts()` already strips synthesized ids on
replay (line 616-628), so the internal identifier never leaks to
Gemini upstream.

This mirrors the streaming path, which already has single-source-of-
truth semantics via `tool_call_snapshots` in `streaming_gemini.rs` —
no change needed there.

Tests (5 new in `transform_gemini::tests`):
- `non_stream_shadow_id_matches_client_visible_id`: asserts
  `response.content[0].id == shadow.tool_calls[0].id ==
  shadow.assistant_content.parts[0].functionCall.id`.
- `non_stream_missing_id_scenario_a_truncated_history_resolves`: turn 2
  sends only `[tool_result(id=A)]`; resolution must succeed.
- `non_stream_missing_id_scenario_b_full_history_replay_resolves`: turn 2
  sends `[assistant(tool_use=A), tool_result(A)]`; shadow-replay branch
  strips the synth id from outgoing `functionCall` while still
  resolving the subsequent `tool_result`.
- `non_stream_preserves_original_gemini_id_when_present`: regression —
  genuine Gemini ids flow through unchanged.
- `non_stream_synthesized_id_not_leaked_to_gemini_via_shadow_replay`:
  defensive — shadow-replay path must strip synth ids from both
  `functionCall.id` and `functionResponse.id`.

All 854 lib tests pass; cargo fmt + clippy -D warnings clean.

Addresses Codex follow-up P1 on #1918.

* refactor(proxy/gemini): share build_anthropic_usage between stream and non-stream paths

`streaming_gemini::anthropic_usage_from_gemini` and
`transform_gemini::build_anthropic_usage` were byte-for-byte identical
(32 lines each) — both converting Gemini `usageMetadata` into the
Anthropic `usage` shape including `cache_read_input_tokens` mapping.

Promote the non-streaming version to `pub(crate)` and reuse it from the
streaming SSE converter. Removes ~30 lines of duplication and guarantees
the two paths cannot drift apart.

No behavioral change; all 854 lib tests pass; cargo fmt + clippy -D
warnings clean.

* fix(proxy/gemini): gate /v1beta behind Google host + normalize models/ model id prefix

Two related P2 corrections to the Gemini Native URL surface, both
folding into the existing Google-host-whitelist architecture.

## P2a — `/v1beta` suffix should not unconditionally trigger rewrite

`should_normalize_gemini_full_url` placed `/v1beta` and `/v1beta/models`
in the unconditional layer on the reasoning that `/v1beta` is
Google-specific. In practice an opaque relay fronting a non-Gemini
service at `https://relay.example/custom/v1beta` would still be
silently rewritten to `/v1beta/models/{model}:generateContent`,
breaking the deployment.

Move `/v1beta`, `/v1beta/models`, and `/v1beta/openai` into the
Google-host gated layer alongside `/v1`, `/models`, and friends. The
unconditional layer now only accepts paths whose grammar is
intrinsically Gemini — `/models/...:generateContent` method calls and
the deep OpenAI-compat endpoints like `/openai/chat/completions` and
`/openai/responses`. Pasted AI-Studio URLs such as
`https://generativelanguage.googleapis.com/v1beta` still normalize
because the host matches the whitelist.

## P2b — `model: "models/gemini-2.5-pro"` produced doubled path prefix

Gemini SDKs (and the official `list_models` response) commonly surface
model ids in resource-name form `models/gemini-2.5-pro`. Raw
interpolation into `format!("/v1beta/models/{model}:...")` produced
`/v1beta/models/models/gemini-2.5-pro:streamGenerateContent` which
upstream rejects — yielding false-negative health checks for otherwise
valid provider configs.

Introduce `normalize_gemini_model_id(&str) -> &str` in `gemini_url`
as the single source of truth: strips an optional leading `/` then an
optional `models/` prefix, leaving bare ids untouched. Apply in the
three call sites that build a Gemini method URL:
- `services/stream_check.rs::resolve_claude_stream_url` (unified path)
- `services/stream_check.rs::check_gemini_stream` (Gemini-only path)
- `proxy/forwarder.rs::rewrite_claude_transform_endpoint` (production)
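
The helper's contract is small enough to restate as a sketch (behavior matches the pinned unit tests; the real function lives in `gemini_url`):

```rust
/// Strip an optional leading '/' and then an optional `models/` prefix,
/// leaving bare ids (and anything past the one stripped prefix) intact.
fn normalize_gemini_model_id(model: &str) -> &str {
    let model = model.strip_prefix('/').unwrap_or(model);
    model.strip_prefix("models/").unwrap_or(model)
}
```

Interpolating the normalized id into `/v1beta/models/{model}:streamGenerateContent` then yields a single `models/` prefix regardless of which form the caller supplied.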

Tests (9 new):
- `gemini_url`: 3 regressions for opaque vs Google-host `/v1beta*`
  handling + 5 unit tests pinning `normalize_gemini_model_id` behavior
  (strip prefix, leave bare id, preserve nested slashes past the one
  stripped prefix, tolerate leading slash, pass through empty input).
- `stream_check`: one end-to-end regression confirming
  `models/gemini-2.5-pro` collapses to the expected single-prefix URL.
- `forwarder`: one end-to-end regression on the production rewrite
  path.

All 864 lib tests pass; cargo fmt + clippy -D warnings clean.

Addresses Codex P2 feedback on #1918.

* fix(proxy/gemini): trim API key before provider-type detection and OAuth parsing

Leading whitespace on a copied oauth_creds.json (e.g. a stray newline
picked up when copying the file content as-is) would slip past the
`starts_with("ya29.") || starts_with('{')` prefix check in
`ClaudeAdapter::provider_type`, causing the provider to be misclassified
as raw-API-key Gemini and fall back to `x-goog-api-key` with the raw
JSON as the key — which upstream rejects with 401.

The frontend's `handleApiKeyChange` already trims on keystrokes but
deep-link imports, the JSON editor, and live-config backfill all bypass
that path. Trim at every backend extraction point so the coverage is
uniform:

- `ClaudeAdapter::extract_key` (5 env / fallback branches) gets
  `.map(str::trim)` before `.filter(|s| !s.is_empty())` so that
  whitespace-only values are also treated as missing.
- `GeminiAdapter::extract_key_raw` gets the same chain (including
  the `.filter` it was missing before).
- `GeminiAdapter::parse_oauth_credentials` gets a defensive
  `let key = key.trim();` at the entry as a belt-and-suspenders guard.
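
A sketch of the resulting classification order (the enum and function names are illustrative, not the actual adapter API):

```rust
#[derive(Debug, PartialEq)]
enum GeminiKeyKind {
    OauthJson,
    AccessToken,
    ApiKey,
}

/// Trim first, classify second, so stray whitespace can no longer
/// defeat the prefix checks; whitespace-only keys count as missing.
fn classify_gemini_key(key: &str) -> Option<GeminiKeyKind> {
    let key = key.trim();
    if key.is_empty() {
        return None;
    }
    Some(if key.starts_with('{') {
        GeminiKeyKind::OauthJson
    } else if key.starts_with("ya29.") {
        GeminiKeyKind::AccessToken
    } else {
        GeminiKeyKind::ApiKey
    })
}
```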

Adds two regression tests covering JSON and bare `ya29.` keys with
leading newline/space.

* fix(proxy/gemini): gate generic REST suffix stripping behind Google host in non-full-URL mode

`build_gemini_native_url` unconditionally stripped `/v1`, `/v1beta`,
`/models`, and `/openai` suffixes from the base path regardless of
host. This worked for Google's own endpoints but silently rewrote
third-party relay URLs like `https://relay.example/custom/v1` to
`.../custom/v1beta/models/...`, breaking any relay that mounts its
Gemini-compatible namespace under a versioned prefix.

The result was also asymmetric with the previously-fixed full-URL
branch: toggling the "full URL" switch changed the outbound URL for
the same base_url, which is exactly the kind of invisible behavior
that makes debugging proxy deployments painful.

Align `normalize_gemini_base_path` with
`should_normalize_gemini_full_url`'s layered model:

- Unconditional: `/models/...:method` structured paths and deep
  OpenAI-compat endpoints (`/openai/chat/completions`,
  `/openai/responses` and their versioned variants) — these are
  unambiguous Gemini-specific grammar on any host.
- Google-host gated: generic `/v1`, `/v1beta`, `/models`, `/openai`
  suffixes only get stripped on `generativelanguage.googleapis.com`,
  `aiplatform.googleapis.com`, or `*-aiplatform.googleapis.com`.
  Other hosts preserve the prefix verbatim so relays keep their
  intended routing.

Adds seven regression tests for the non-full-URL flow: opaque relay
preservation (v1 / v1beta / models / openai suffix variants), Google
host normalization (counter-case), and boundary cases (structured
method path and deep OpenAI-compat endpoint stripped regardless of
host).

Test count: 864 -> 873.

* Revert "fix(proxy/gemini): gate generic REST suffix stripping behind Google host in non-full-URL mode"

This reverts commit d19ff09cb7.

* test(proxy/gemini): pin non-full-URL versioned relay base stripping

Adds two regression tests that lock in the intentional asymmetry
between full-URL and non-full-URL modes:

- Full-URL mode: opaque base path (e.g. `https://relay.example/custom/v1beta`)
  is preserved verbatim. Already covered by
  `preserves_opaque_full_url_with_bare_v1beta_suffix`.
- Non-full-URL mode: base path MUST strip `/v1`, `/v1beta`, etc. so the
  standard `/v1beta/models/{model}:method` endpoint can be appended
  without producing a doubled `/v1beta/v1beta/models/...` path.

The non-full-URL contract is "base URL + cc-switch appends the
canonical Gemini endpoint". A user who needs a relay's custom
namespace (e.g. `/v1/models/...`) must use full-URL mode and paste
the complete method path. This commit adds regression coverage so a
future attempt to mirror full-URL's host-whitelist gating into
`normalize_gemini_base_path` will fail the test suite immediately.

* chore(lint): address clippy 1.95 findings in existing modules

CI upgraded to Rust 1.95 and flagged ten pre-existing warnings that
older toolchains did not enforce. None relate to the Gemini proxy
integration PR itself but they block CI on the feature branch, so
clean them up here as a separate commit for easy review:

collapsible_match:
- proxy/providers/gemini_schema.rs: `"items" if value.is_object()`
  match guard instead of nested if.
- proxy/providers/transform_responses.rs: fold
  `map_responses_stop_reason`'s `"completed"` / `"incomplete"` arms
  into match guards, relying on the existing `_ => "end_turn"` fall-
  through for non-matching guard conditions (semantics preserved).
- services/session_usage_codex.rs: fold
  `"session_meta" if state.session_id.is_none()` guard, relying on
  the existing `_ => {}` fall-through.

unnecessary_sort_by:
- services/provider/endpoints.rs: `sort_by_key(|ep| Reverse(ep.added_at))`.
- services/skill.rs (backup list): same Reverse idiom on `created_at`.
- services/skill.rs (skill listings x2): `sort_by_key(|s| s.name.to_lowercase())`.

useless_conversion:
- services/skill.rs: drop the explicit `.into_iter()` on `zip`'s argument.

while_let_loop:
- services/webdav_auto_sync.rs: `while let Some(wait_for) = ...`
  instead of `loop { let Some(...) = ... else { break }; ... }`.

All changes are mechanical and preserve behavior. `cargo test --lib`
remains green (868 passed).

* fix(proxy/gemini): reconcile synthesized tool-call ids with later real ids + preserve thoughtSignature

Three related findings on `streaming_gemini.rs` for Gemini's cumulative
`streamGenerateContent` stream, all centered on `merge_tool_call_snapshots`:

1. (P1) Match upgraded tool-call IDs by position.
   When Gemini delivers a `functionCall` without an id on chunk 1
   (cc-switch synthesizes `gemini_synth_*`) and then upgrades it to a
   real id on chunk 2, the `Some(incoming_id)` branch only matched by
   id and missed the existing synthesized snapshot. A second entry
   would be pushed, yielding duplicate `tool_use` content blocks at
   stream end — one with the synthesized id, one with the real id —
   which could trigger duplicate tool execution and break tool_result
   correlation. Add a positional fallback: when no id match exists but
   the same-position slot holds a synthesized id, merge into it.
   `or(preserved_id)` already lets the real id win the merge.

2. (P2) Preserve prior thoughtSignature when merging snapshots.
   `tool_call_snapshots[index] = tool_call` overwrote the slot
   entirely, dropping any `thoughtSignature` captured on an earlier
   chunk if the current cumulative snapshot omitted it. Since
   `build_shadow_assistant_parts` writes `thoughtSignature` into the
   shadow turn from `tool_call.thought_signature`, a dropped signature
   would cause later replay requests to Gemini to be rejected with
   invalid-signature errors. Preserve the existing signature when the
   incoming chunk does not carry one.

3. (P2) Document the part-order streaming trade-off.
   All `tool_use` content blocks are emitted after the final text
   `content_block_stop`, so interleaved [text, functionCall, text,
   functionCall] parts arrive at the Anthropic client as [text(concat),
   tool_use, tool_use] — different from the non-streaming transformer,
   which preserves part order. This is intentional given the cumulative
   snapshot model and the consumers we target (claude-code-like clients
   don't depend on strict interleaving for tool execution correctness).
   Add a block comment at the flush site describing the trade-off and
   what a strict-order fix would entail, so this isn't rediscovered as
   a bug later.
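
Findings 1 and 2 can be sketched together on a simplified snapshot type (field and function names approximate, not quote, the real `streaming_gemini.rs` code):

```rust
#[derive(Clone, Debug)]
struct ToolCallSnapshot {
    id: Option<String>,
    name: String,
    args: String,
    thought_signature: Option<String>,
}

const SYNTH_PREFIX: &str = "gemini_synth_";

/// Merge one cumulative chunk into the snapshot list: a real id arriving
/// later merges into the same-position synthesized slot instead of
/// pushing a duplicate, and an earlier thoughtSignature survives a
/// chunk that omits it.
fn merge_tool_call_snapshot(
    snapshots: &mut Vec<ToolCallSnapshot>,
    position: usize,
    incoming: ToolCallSnapshot,
) {
    let slot = match incoming.id.as_deref() {
        Some(id) => snapshots
            .iter()
            .position(|s| s.id.as_deref() == Some(id))
            .or_else(|| {
                // Positional fallback: same slot holds a synthesized id.
                snapshots.get(position).and_then(|s| {
                    s.id.as_deref()
                        .filter(|existing| existing.starts_with(SYNTH_PREFIX))
                        .map(|_| position)
                })
            }),
        None => (position < snapshots.len()).then_some(position),
    };
    match slot {
        Some(i) => {
            let prev = &mut snapshots[i];
            // The real id wins the merge; a synthesized incoming id
            // never overwrites whatever the slot already holds.
            if matches!(incoming.id.as_deref(), Some(id) if !id.starts_with(SYNTH_PREFIX)) {
                prev.id = incoming.id;
            }
            prev.name = incoming.name;
            prev.args = incoming.args;
            // Keep the earlier signature when this chunk omits it.
            if incoming.thought_signature.is_some() {
                prev.thought_signature = incoming.thought_signature;
            }
        }
        None => snapshots.push(incoming),
    }
}
```

The positional fallback only fires when the same-position slot holds a synthesized id, so genuinely new parallel calls still append as fresh entries.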

Regression tests:
- upgraded_real_id_merges_into_existing_synthesized_snapshot
- thought_signature_preserved_when_later_chunk_omits_it

Test count: 868 -> 870. clippy 1.95 clean. fmt clean.

* fix(proxy/gemini): prefer exact tool-call id over normalized-name fallback

The shadow-turn matcher used a three-branch `||` chain (id / full name /
normalized name). When two tools share a suffix (e.g. `server_a:search`
and `server_b:search`), the normalized-name clause could short-circuit
on an earlier turn whose id is actually wrong for the incoming tool_use,
mis-routing replay state (functionCall id / thoughtSignature) for later
tool_result resolution.

Split matching into two layers: when the incoming message carries any
tool_use ids, run id-based lookup first and return on the earliest hit.
Only fall back to full-name / normalized-name matching when the incoming
ids are absent or none of them resolve.

Add two regressions:

- shadow_replay_prefers_exact_id_match_over_normalized_name_collision
  Two shadow turns with colliding normalized names and two assistant
  messages whose ids cross the positional order; asserts each message
  replays the id-correct shadow turn (including thoughtSignature).

- shadow_replay_falls_back_to_name_when_ids_absent
  Shadow turn with no id and incoming tool_use with an empty id;
  asserts the name fallback still populates the replayed part.
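
A simplified sketch of the two-layer matcher (the real matcher also normalizes names and replays more per-turn state than shown here):

```rust
struct ShadowTurn {
    tool_call_ids: Vec<Option<String>>,
    normalized_name: String,
}

/// Layer 1: when the incoming message carries any tool_use ids, exact id
/// lookup runs first and the earliest resolvable id wins. Layer 2: the
/// name fallback applies only when ids are absent or none resolve.
fn match_shadow_turn(
    turns: &[ShadowTurn],
    incoming_ids: &[&str],
    incoming_name: &str,
) -> Option<usize> {
    if let Some(hit) = incoming_ids.iter().find_map(|&id| {
        turns.iter().position(|t| {
            t.tool_call_ids.iter().any(|tid| tid.as_deref() == Some(id))
        })
    }) {
        return Some(hit);
    }
    turns.iter().position(|t| t.normalized_name == incoming_name)
}
```

With two turns sharing a normalized name, an id hit on the second turn now beats the first turn's name match.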

---------

Co-authored-by: Jason <farion1231@gmail.com>
2026-04-16 22:42:49 +08:00
Dex Miller
de23216e49 feat(usage): refine usage dashboard UI and date range picker (#2002)
* feat(usage): enhance usage stats backend and query hooks

* feat(usage): redesign calendar date range picker with auto-switch and simplified layout

* refactor(usage): streamline dashboard layout and stats components

* refactor(usage): compact request log table with merged cache/multiplier columns and centered layout

* feat(i18n): add cache short labels and usage stats translations for zh/en/ja

* Align usage dashboard stats with range boundaries

The usage dashboard mixed second-precision detail rows with day-level rollups, which caused custom half-day ranges to overcount historical rollup data and left the request log paginator on stale pages after top-level filter changes.

This change limits rollups to fully covered local days, aligns multi-day trend buckets with natural local days, and resets request log pagination when the dashboard range or app filter changes.

Constraint: usage_daily_rollups stores only daily aggregates after pruning old detail rows
Rejected: Include partial boundary rollups proportionally | historical intra-day detail is unavailable after pruning
Rejected: Force RequestLogTable remount on range change | would discard local draft filters unnecessarily
Confidence: high
Scope-risk: moderate
Reversibility: clean
Directive: Keep summary, trends, provider stats, and model stats on the same rollup-boundary rules
Tested: cargo test --manifest-path src-tauri/Cargo.toml usage_stats
Tested: pnpm exec vitest run tests/components/RequestLogTable.test.tsx
Tested: pnpm typecheck
Not-tested: Manual UI validation in the Tauri app

* Preserve full-day usage filters at minute precision

The latest review surfaced two interaction bugs in the usage dashboard: rollup-backed stats undercounted end days selected via the minute-precision picker, and immediate select changes accidentally applied unsubmitted text drafts from the request log filters.

This change treats 23:59 as a fully selected local end day for rollup inclusion and narrows select-side state syncing so app/status updates do not commit provider/model drafts.

Constraint: The custom range picker emits minute-precision timestamps, while rollups are stored at day granularity
Rejected: Require exact 23:59:59 end timestamps | unreachable from the current picker UI
Rejected: Rebuild applied filters from the full draft state on select changes | silently commits unsaved text input
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Keep request-log text fields on explicit apply semantics even when select filters remain immediate
Tested: cargo test --manifest-path src-tauri/Cargo.toml usage_stats
Tested: pnpm exec vitest run tests/components/RequestLogTable.test.tsx
Tested: pnpm typecheck
Not-tested: Manual Tauri dashboard interaction

* refactor(usage): move range presets into date picker, single-row layout

- UsageDateRangePicker: add preset shortcuts (Today/1d/7d/14d/30d) at the
  top of the popover; clicking a preset applies immediately and closes it
- UsageDashboard: collapse to single row (app filters + refresh + picker);
  remove standalone preset buttons and summary stats bar
- RequestLogTable: replace static Calendar badge with interactive
  UsageDateRangePicker via onRangeChange prop; single filter row

* Keep usage pagination regression coverage aligned with the rendered UI

The new regression test was asserting a non-existent pagination label and page summary text, so it failed before it could verify the real page-reset behavior. This commit switches the assertions to the numbered pagination buttons that the component actually renders and validates the reset through the query hook arguments.

Constraint: RequestLogTable exposes numbered pagination buttons, not a "Next page" label or "2 / 6" summary text
Rejected: Add synthetic pagination labels solely for the test | would couple production markup to a test-only assumption
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Prefer pagination assertions that follow the rendered controls or hook inputs instead of invented summary text
Tested: pnpm vitest run tests/components/RequestLogTable.test.tsx; pnpm typecheck; pnpm test:unit

* refactor(usage): clean up dead code and polish date range picker

- Remove unused exports MAX_CUSTOM_USAGE_RANGE_SECONDS,
  timestampToLocalDatetime, and localDatetimeToTimestamp from
  usageRange.ts (replaced by the calendar picker)
- Deduplicate getPresetLabel from UsageDashboard and
  UsageDateRangePicker into shared getUsageRangePresetLabel helper
- Add aria-label, aria-current and aria-pressed to calendar day
  buttons so screen readers can disambiguate same-numbered days
  across adjacent months
- Drop unused cacheReadShort and cacheWriteShort i18n keys (zh/en/ja);
  the request log table renders R/W prefixes inline
- Align customRangeHint copy with the removed 30-day limit by
  dropping "up to 30 days" wording (zh/en/ja)

* fix(usage): align rollup cutoff to local midnight to keep days complete

`rollup_and_prune` previously used `Utc::now() - retain_days * 86400`
as the cutoff. Because rollups are bucketed by *local* date and detail
rows below the cutoff are pruned, an unaligned cutoff left the youngest
rolled-up day half-rolled-up and half-pruned. Combined with the new
`compute_rollup_date_bounds` boundary trimming (which excludes any
rollup day not fully covered by the requested range), custom range
queries that touch that day silently under-count summary, trend,
provider, and model stats.

Fix the invariant at the source: snap the cutoff to the next local
midnight after `(now - retain_days)`. Every rollup row now reflects a
complete local day, so the boundary trimmer's all-or-nothing assumption
holds.

Includes unit tests for the cutoff math (typical case + already-on-
midnight case). DST gap is handled defensively by bumping forward by
an hour.
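
The cutoff math, with a fixed UTC offset standing in for chrono's local-timezone handling (the real code also bumps forward by an hour across a DST gap; epoch values assumed non-negative):

```rust
const DAY: i64 = 86_400;

/// Snap `now - retain_days` up to the next local midnight so that every
/// rolled-up (and pruned) day is a complete local day.
fn rollup_cutoff(now_utc: i64, retain_days: i64, utc_offset_secs: i64) -> i64 {
    let raw = now_utc - retain_days * DAY;
    let local = raw + utc_offset_secs;
    let next_midnight_local = if local % DAY == 0 {
        local // already exactly on a local midnight: keep it
    } else {
        (local / DAY + 1) * DAY
    };
    next_midnight_local - utc_offset_secs
}
```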

Addresses Codex P2 review finding on PR #2002.

---------

Co-authored-by: Jason <farion1231@gmail.com>
2026-04-16 17:00:28 +08:00
Jason Young
507bf038a9 feat(stream-check): refresh default models and detect model-not-found errors (#2099)
* chore(stream-check): update default health check models to latest

Replaces deprecated gpt-5.1-codex@low with gpt-5.4@low and switches
the Gemini default from gemini-3-pro-preview to gemini-3-flash-preview
to pick the lightest variant of the latest series for fast, low-cost
health checks.

https://claude.ai/code/session_01NGWLchcTP76rJHjiP5Ehte

* feat(stream-check): detect model-not-found errors with dedicated toast

Health check previously classified failures purely by HTTP status code,
which meant deprecated/invalid models showed up as a generic "Not found
(404)" error pointing users to check the Base URL — misleading when the
URL is fine and only the test model is wrong (e.g. gpt-5.1-codex after
it was retired).

Backend: add detect_error_category() that inspects 4xx response bodies
for model-not-found indicators (model_not_found, does not exist,
invalid model, not_found_error, etc.) and returns a "modelNotFound"
category. Thread the resolved test model through build_stream_check_result
so the failed result carries it in model_used. Add StreamCheckResult
.error_category field (serde-skipped when None).

Frontend: useStreamCheck branches on errorCategory === "modelNotFound"
before the HTTP-status fallback and renders a toast.error with the model
name and a description pointing to Model Test Config. Add i18n keys
(modelNotFound / modelNotFoundHint) for zh/en/ja.

Tests: unit-test detect_error_category against real OpenAI/Anthropic
error shapes, 5xx false-positive avoidance, and plain 401 auth errors.

https://claude.ai/code/session_01NGWLchcTP76rJHjiP5Ehte

* fix(stream-check): add missing error_category field in fallback

The error_category field was added to StreamCheckResult in this branch
but the fallback constructor in stream_check_all_providers was not
updated, which broke cargo build.

---------

Co-authored-by: Claude <noreply@anthropic.com>
2026-04-15 15:25:32 +08:00
Dex Miller
ef41e4da46 fix(proxy): strip hop-by-hop response headers per RFC 7230 (#2060) 2026-04-15 11:28:56 +08:00
wwminger
78198e262b fix(opencode): use json5 parser for trailing comma tolerance (#2023)
* fix(opencode): use json5 parser for trailing comma tolerance

OpenCode CLI writes opencode.json with trailing commas (valid JSONC),
but CC Switch parsed it with serde_json (strict JSON), causing errors
like 'trailing comma at line 35 column 3'.

Switch to json5::from_str which accepts both JSON and JSONC. The json5
crate is already a project dependency. Change error type from
AppError::json() to AppError::Config() since json5::Error differs from
serde_json::Error.
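For illustration only (a toy, not the json5 crate): the class of input that trips strict serde_json can be seen by stripping trailing commas outside string literals, which is essentially the extra tolerance JSONC needs:

```rust
// Toy JSONC helper: drop a comma whose next significant character
// closes an object or array, leaving string contents untouched.
fn strip_trailing_commas(input: &str) -> String {
    let chars: Vec<char> = input.chars().collect();
    let mut out = String::with_capacity(input.len());
    let (mut in_string, mut escaped) = (false, false);
    let mut i = 0;
    while i < chars.len() {
        let c = chars[i];
        if in_string {
            out.push(c);
            if escaped {
                escaped = false;
            } else if c == '\\' {
                escaped = true;
            } else if c == '"' {
                in_string = false;
            }
        } else if c == '"' {
            in_string = true;
            out.push(c);
        } else if c == ',' {
            // Look ahead past whitespace for a closing bracket.
            let mut j = i + 1;
            while j < chars.len() && chars[j].is_whitespace() {
                j += 1;
            }
            if !(j < chars.len() && (chars[j] == '}' || chars[j] == ']')) {
                out.push(c);
            }
        } else {
            out.push(c);
        }
        i += 1;
    }
    out
}
```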

* style(opencode): apply rustfmt to satisfy cargo fmt --check

The previous commit's .map_err(...) chain exceeded rustfmt's default
100-char max_width, breaking CI's `cargo fmt --check`. Let rustfmt
wrap the closure body as a multi-line block. No behavior change.

---------

Co-authored-by: 18067889926 <ming.flute@outlook.com>
Co-authored-by: Jason <farion1231@gmail.com>
2026-04-15 11:11:48 +08:00
Jason
79eb773195 fix: remove unused mut to pass clippy -D warnings 2026-04-15 09:16:22 +08:00
Jason
6092a87b40 fix: preserve env vars when saving Google Official Gemini provider (#2087)
write_gemini_live() unconditionally cleared env_map for GoogleOfficial
auth type, discarding user-configured env vars (e.g. GEMINI_MODEL).
Remove the env_map.clear() call so the user's settings_config.env is
written as-is, and merge identical Packycode/Generic match arms.
2026-04-15 09:05:45 +08:00
Jason
689ca08409 feat: classify stream check errors with color-coded toasts
Distinguish between "provider rejects probe" (yellow warning) and
"genuinely broken" (red error) in health check results.

Backend: add AppError::HttpStatus variant to carry structured HTTP
status codes, populate http_status on error results, classify codes
into short labels (e.g. "Auth rejected (401)"), and truncate overly
long response bodies.

Frontend: route 401/403/400/429/5xx to toast.warning with localized
hints explaining that the error may not mean the provider is actually
unusable; route 404/402/connection errors to toast.error. Add i18n
keys for all three locales (zh/en/ja).

Also deduplicate check_once by reusing build_stream_check_result.
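The routing above can be summarized in a small sketch (the enum and function names are illustrative assumptions, not the shipped code):

```rust
// Map an HTTP status to a toast severity following the split this
// commit describes: "provider rejects probe" vs "genuinely broken".
#[derive(Debug, PartialEq)]
enum ToastKind {
    Warning,
    Error,
}

fn toast_for_status(status: u16) -> ToastKind {
    match status {
        400 | 401 | 403 | 429 => ToastKind::Warning, // probe rejected, key may still work
        500..=599 => ToastKind::Warning,             // transient upstream failure
        404 | 402 => ToastKind::Error,               // wrong URL or payment required
        _ => ToastKind::Error,                       // treat anything else as broken
    }
}
```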
2026-04-14 17:11:13 +08:00
Jason
04508801ef fix: handle root-level skill repos during installation
When a repo itself is a single skill (SKILL.md at repo root), the
discovery phase sets directory to the repo name, but after ZIP
extraction (which strips the root folder), no matching subdirectory
exists. Add a fallback to check if SKILL.md exists directly in the
extracted temp directory before reporting SKILL_DIR_NOT_FOUND.

Fixes installation of repos like zlbigger/Google-SEOs.skill.
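The fallback described above amounts to something like the following (a hypothetical sketch; the function name and return shape are assumptions):

```rust
use std::path::{Path, PathBuf};

// Prefer the subdirectory found during discovery, but if the repo
// itself is a single skill, ZIP extraction strips the root folder
// and SKILL.md lands directly in the temp directory — use that.
fn resolve_skill_dir(extract_root: &Path, discovered: &str) -> Option<PathBuf> {
    let sub = extract_root.join(discovered);
    if sub.join("SKILL.md").is_file() {
        return Some(sub);
    }
    if extract_root.join("SKILL.md").is_file() {
        return Some(extract_root.to_path_buf());
    }
    None // caller reports SKILL_DIR_NOT_FOUND
}
```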
2026-04-14 15:56:31 +08:00
Jason
4a0b5c3dec refactor: remove per-provider proxy config feature
The per-provider proxy configuration (meta.proxyConfig) is removed
because its scope is too narrow and covered by global proxy settings
and proxy takeover mode. Users can achieve the same result via the
global proxy panel.

Changes:
- Remove ProviderProxyConfig type (frontend TS + backend Rust)
- Remove ProviderAdvancedConfig proxy UI block, keep testConfig/pricingConfig
- Simplify http_client: delete build_proxy_url_from_config,
  build_client_for_provider, get_for_provider
- Simplify forwarder/stream_check/model_fetch to use global client
- Remove i18n keys (en/zh/ja)
- Fix pre-existing test bug in transform.rs (extra None arg)
2026-04-14 14:26:55 +08:00
Jason
d13a8d7353 fix(clippy): remove redundant closure in session ID parsing
Replace `|uid| parse_session_from_user_id(uid)` with direct
function reference to satisfy clippy::redundant_closure.
2026-04-14 10:34:58 +08:00
Jason
0739b60341 fix(proxy): reduce unnecessary Copilot premium interaction consumption
- Fix request classification: treat messages containing tool_result as
  agent continuation instead of user-initiated, preventing false premium
  charges on every tool call
- Add subagent detection via __SUBAGENT_MARKER__ and metadata._agent_
  fallback, setting x-interaction-type=conversation-subagent
- Add deterministic x-interaction-id derived from session ID to group
  requests into a single billing interaction
- Add orphan tool_result sanitization to prevent upstream API errors
  that could cause retries and duplicate billing
- Reorder pipeline: classify (on original body) → sanitize → merge →
  warmup, ensuring classification sees raw tool_result semantics
- Enable warmup downgrade by default with gpt-5-mini model
- Enhance session ID extraction priority chain for Copilot cache keys
- Detect infinite whitespace bug in streaming tool call arguments
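The classification rule in the first bullet can be sketched as follows (struct and function names are assumptions for illustration):

```rust
// A request whose latest user message carries a tool_result block is
// an agent continuation, not a fresh user-initiated interaction, so
// it should not count as a new premium interaction.
struct Msg {
    role: &'static str,
    has_tool_result: bool,
}

fn is_user_initiated(messages: &[Msg]) -> bool {
    match messages.iter().rev().find(|m| m.role == "user") {
        Some(m) => !m.has_tool_result,
        None => false,
    }
}
```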
2026-04-14 10:34:58 +08:00
Jason
c01338ac33 fix(usage): remove unnecessary private IP restrictions from usage script
SSRF protection (private IP blocking, suspicious hostname detection) was
originally added for web-server threat models but is unnecessary for a
local desktop app where the user already has full network access. This
removal unblocks legitimate use cases like enterprise intranet APIs,
Docker container addresses, and self-hosted services.

Retained: HTTPS enforcement and same-origin checks which still provide
meaningful security (protecting API keys in transit and preventing
scripts from leaking keys to unrelated domains).
2026-04-14 10:33:41 +08:00
Jason
420f4c8c23 fix(sessions): strip OpenClaw message_id suffix and allow 2-line titles
OpenClaw gateway injects `[message_id: UUID]` metadata at the end of
every message, wasting display space. Strip this suffix from both title
and summary fields.

Also change session title display from single-line truncate to
line-clamp-2, so longer titles (e.g. OpenClaw's timestamp-prefixed
messages) can show more meaningful content across two lines.
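The stripping step can be sketched like this (function name is an assumption; the real code may differ):

```rust
// Drop a trailing "[message_id: <uuid>]" marker, as injected by the
// OpenClaw gateway, from a session title or summary string.
fn strip_message_id_suffix(text: &str) -> &str {
    let trimmed = text.trim_end();
    if trimmed.ends_with(']') {
        if let Some(start) = trimmed.rfind("[message_id:") {
            return trimmed[..start].trim_end();
        }
    }
    trimmed
}
```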
2026-04-14 10:33:41 +08:00