Prevents `git fetch origin pull/<N>/head:main` failures on fork PRs
whose head branch is also named `main` — using a SHA puts the runner
in detached HEAD so the main ref is free for the action's internal fetch.
Now that the view transition animation is gone, setTheme no longer
needs click coordinates. Reduce the API surface to
`(theme: Theme) => void` and simplify the call sites in mode-toggle
and ThemeSettings.
The View Transitions API used here crashes WebKitGTK with SIGSEGV on
Linux. Rather than gating document.startViewTransition per platform
(see PR #2502), drop the animation entirely — it's a low-value visual
flourish on a low-frequency action that doesn't justify a permanent
platform branch.
Removes the ::view-transition-* CSS block and the coordinate plumbing
in setTheme. The optional event parameter is kept on the API surface
to keep call sites compiling; they'll be cleaned up in a follow-up
commit.
The cap was too tight: review tasks need 8-15 turns to read files, analyze,
and post results. The first run after the previous prompt change failed
with error_max_turns at turn 6. The disallowedTools list already keeps
the agent in read-only mode, so an explicit turn cap is redundant.
Previous prompt produced 5-10 findings per PR including dead-code/style
nits mixed with real bugs. The new prompt adds severity tiering, a
confidence threshold of 80, an explicit anti-pattern list, a 5-nit cap, and
permits zero-comment LGTM as a valid outcome. Calibration follows
Anthropic Code Review docs and the claude-plugins-official prompt.
- Restrict to repo collaborators via author_association check
- Use OAuth token for subscription billing
- Disable Edit/Write tools and set contents:read permission to enforce review-only
Anthropic SDK assigns distinct semantics to the two env vars:
- ANTHROPIC_API_KEY -> x-api-key
- ANTHROPIC_AUTH_TOKEN -> Authorization: Bearer
The Claude adapter previously collapsed both into AuthStrategy::Anthropic
and then emitted Authorization: Bearer regardless, breaking strict
Anthropic-protocol endpoints (Anthropic official, Cloudflare AI Gateway,
OpenCode Go, DashScope) and silently overriding the user's intended auth
scheme.
- claude::extract_auth: infer strategy from env var name
(ANTHROPIC_AUTH_TOKEN -> ClaudeAuth, ANTHROPIC_API_KEY -> Anthropic),
matching the precedence already used by extract_key.
- claude::get_auth_headers: split the Anthropic arm so it emits
x-api-key, while ClaudeAuth and Bearer continue to use Bearer.
- stream_check: reuse ClaudeAdapter::get_auth_headers as the single
source of truth, replacing the prior "always Bearer + maybe x-api-key"
double injection that produced auth conflicts and false-negative
health checks.
- Cover each strategy -> header mapping and env-var precedence with
new unit tests in claude.rs.
Refs #2368, #2380
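A minimal sketch of the resulting mapping; the AuthStrategy variant names
come from the text above, while the function shapes and env handling are
illustrative rather than the adapter's actual code:

```rust
use std::collections::HashMap;

enum AuthStrategy {
    Anthropic,  // inferred from ANTHROPIC_API_KEY
    ClaudeAuth, // inferred from ANTHROPIC_AUTH_TOKEN
    Bearer,     // generic bearer-token providers
}

fn infer_strategy(env: &HashMap<String, String>) -> Option<(AuthStrategy, String)> {
    // ANTHROPIC_AUTH_TOKEN wins when both vars are set, mirroring extract_key.
    if let Some(token) = env.get("ANTHROPIC_AUTH_TOKEN") {
        return Some((AuthStrategy::ClaudeAuth, token.clone()));
    }
    env.get("ANTHROPIC_API_KEY")
        .map(|key| (AuthStrategy::Anthropic, key.clone()))
}

fn auth_header(strategy: &AuthStrategy, secret: &str) -> (&'static str, String) {
    match strategy {
        // Strict Anthropic-protocol endpoints expect x-api-key.
        AuthStrategy::Anthropic => ("x-api-key", secret.to_string()),
        // Token-style auth keeps the Authorization: Bearer form.
        AuthStrategy::ClaudeAuth | AuthStrategy::Bearer => {
            ("authorization", format!("Bearer {secret}"))
        }
    }
}
```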
Claude Code injects a dynamic `x-anthropic-billing-header` line at the
start of `system` content. Its rotating `cch=` token was forwarded into
OpenAI Responses `instructions` and Chat system messages, which broke
upstream prefix prompt cache reuse — a stable ~95k-token prefix was
getting re-charged on every request.
Strip only the leading occurrence in both anthropic_to_openai and
anthropic_to_responses; later occurrences are preserved so user-authored
prompt text containing the same string is not lost.
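A hedged sketch of the strip, assuming the injected line arrives as the
first line of the system string; the exact match rule in the real code may
differ:

```rust
fn strip_leading_billing_header(system: &str) -> String {
    let mut lines = system.lines();
    match lines.next() {
        Some(first) if first.contains("x-anthropic-billing-header") => {
            // Drop only the injected first line; later occurrences are
            // user-authored text and must survive.
            lines.collect::<Vec<_>>().join("\n")
        }
        _ => system.to_string(),
    }
}
```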
Hermes aggregates all in-process API calls into a single sessions row
with the `model` field locked to the initial model, so the usage
dashboard cannot cleanly surface per-call billing context. Two rounds
of UI workarounds (raw mapping, then `<model> @ <host>` display) did
not resolve the user-facing confusion, so the whole tracking
integration is dropped for now.
Removes session_usage_hermes service (and its 17 tests), sync wiring
in commands/usage.rs and lib.rs, _hermes_session/hermes_session
entries in usage_stats SQL (provider_name_coalesce CASE and
effective_usage_log_filter IN clause), frontend Tab/banner/dropdown/
icon entries, and four i18n keys per locale.
Hermes app integration outside usage tracking (proxy routing,
session manager, config) is preserved. Pre-existing hermes rows in
proxy_request_logs are left as orphans — filtered out by the
updated SQL and never surfaced in the UI.
Zhipu's `data.limits[]` returns 1 entry for legacy plans (subscribed
before 2026-02-12) and 2 entries for current plans. Previously every
TOKENS_LIMIT entry was hardcoded as `five_hour`, so the weekly bucket
was rendered with the 5-hour i18n label.
Sort TOKENS_LIMIT entries by nextResetTime ascending and assign
`five_hour` to index 0, `weekly_limit` to index 1. Legacy plans
naturally degrade to a single five_hour tier.
Also harden the parser: case-insensitive type match (defends against
upstream casing changes), reuse TIER_FIVE_HOUR/TIER_WEEKLY_LIMIT
constants, and add 8 unit tests covering both plan shapes plus
defensive edge cases.
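A sketch of the tier assignment over a simplified Limit struct; the field
names mirror the API fields mentioned above, the struct itself is
illustrative:

```rust
const TIER_FIVE_HOUR: &str = "five_hour";
const TIER_WEEKLY_LIMIT: &str = "weekly_limit";

struct Limit {
    kind: String,         // the API's `type` field, matched case-insensitively
    next_reset_time: i64, // nextResetTime
}

fn assign_tiers(limits: &[Limit]) -> Vec<(&'static str, i64)> {
    let mut tokens: Vec<&Limit> = limits
        .iter()
        .filter(|l| l.kind.eq_ignore_ascii_case("TOKENS_LIMIT"))
        .collect();
    // The 5-hour bucket resets sooner than the weekly one, so ascending
    // nextResetTime puts it at index 0.
    tokens.sort_by_key(|l| l.next_reset_time);
    tokens
        .iter()
        .enumerate()
        .filter_map(|(i, l)| match i {
            0 => Some((TIER_FIVE_HOUR, l.next_reset_time)),
            1 => Some((TIER_WEEKLY_LIMIT, l.next_reset_time)),
            _ => None, // ignore unexpected extra entries
        })
        .collect()
}
```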
* fix(dashscope): enhance usage parsing robustness to prevent VSCode crashes
Enhanced build_anthropic_usage_from_responses() to handle null, missing, empty,
and partial usage fields gracefully. This prevents VSCode Extension crashes with
"Cannot read properties of null (reading 'output_tokens')" when connecting to
DashScope (Alibaba Cloud Bailian) models.
Changes:
- Added defensive null checks and empty object detection
- Implemented OpenAI field name fallbacks (prompt_tokens/completion_tokens)
- Added comprehensive logging for malformed usage scenarios
- Fixed streaming SSE event handlers with null-safe usage access
- Preserved cache token fields even when input/output tokens are missing
This ensures the proxy never crashes on malformed Responses API usage objects,
returning valid Anthropic-compatible usage structures (input_tokens/output_tokens)
in all cases.
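A hedged sketch of the defensive extraction; the real
build_anthropic_usage_from_responses also carries cache-token fields and
logging, which are omitted here:

```rust
use serde_json::{json, Value};

fn anthropic_usage_from_responses(usage: Option<&Value>) -> Value {
    // First key that resolves to a number wins; anything else becomes 0.
    let get = |keys: &[&str]| -> u64 {
        let u = match usage {
            Some(u) => u,
            None => return 0,
        };
        keys.iter()
            .filter_map(|k| u.get(*k))
            .find_map(Value::as_u64)
            .unwrap_or(0)
    };
    json!({
        // Responses API names first, Chat Completions names as fallback.
        "input_tokens": get(&["input_tokens", "prompt_tokens"]),
        "output_tokens": get(&["output_tokens", "completion_tokens"]),
    })
}
```

With usage missing, null, or an empty object, both counters fall back to 0,
matching the no-crash guarantee above.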
* fix(proxy): tighten Responses API usage fix per review
- Drop redundant fallback in streaming.rs Chat Completions path; the
existing if-let-Some guard already prevents usage:null, so the extra
layer was dead code and caused a fmt-breaking indentation issue.
- Demote partial-usage warn to debug. Streaming chunks legitimately
arrive with partial token counts and the warn-level log was noisy.
- Rewrite CHANGELOG entry: reference #2422, broaden scope from
DashScope-only to all api_format=openai_responses users (Codex OAuth
is the strongest signal; DashScope compatible-mode/v1/responses is
the original report).
- cargo fmt to clear 12 formatting differences vs main.
---------
Co-authored-by: Jason <farion1231@gmail.com>
* fix(config): sort JSON keys alphabetically for deterministic output
Ensures settings.json keys are written in sorted order, preventing
non-deterministic git diffs when switching configs.
* test(config): add unit tests for sort_json_keys and fix formatting
Cover top-level sort, nested recursion, array order preservation,
primitive pass-through, empty collections, and the core determinism
guarantee (different insertion orders must yield identical output).
Also fix line-length in write_json_file flagged by `cargo fmt --check`.
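A sketch of the recursive sort, assuming serde_json; not the actual
implementation:

```rust
use serde_json::Value;
use std::collections::BTreeMap;

fn sort_json_keys(value: Value) -> Value {
    match value {
        Value::Object(map) => {
            // BTreeMap iteration order is what yields the alphabetical keys.
            let sorted: BTreeMap<String, Value> = map
                .into_iter()
                .map(|(k, v)| (k, sort_json_keys(v)))
                .collect();
            Value::Object(sorted.into_iter().collect())
        }
        // Array order is user-meaningful and must be preserved.
        Value::Array(items) => {
            Value::Array(items.into_iter().map(sort_json_keys).collect())
        }
        // Primitives pass through untouched.
        other => other,
    }
}
```

Feeding different insertion orders through this function yields identical
output, which is the determinism guarantee the tests pin down.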
---------
Co-authored-by: Jason <farion1231@gmail.com>
* Keep Codex history stable across provider switches
* Restore template Codex provider id when backfilling live config
Backfill writes the current Codex live config back to the previous
provider's stored template after a switch. Because the live file now
carries a normalized stable model_provider id, the previous provider's
template would lose its own provider-specific id (and any matching
[profiles.*] references) on every subsequent switch.
Reverse the normalization at backfill time by rewriting model_provider,
the active model_providers section, and matching profile references back
to the template's original id.
---------
Co-authored-by: Jason <farion1231@gmail.com>
Hermes:
- Parse ~/.hermes/state.db sessions (incl. profiles/*/state.db) into
proxy_request_logs with data_source='hermes_session', WAL-aware
incremental sync, Hermes-reported cost preferred over model_pricing
fallback
Zero-cost bug (dashboard showed $0 totals):
- GPT-5.5 family default pricing (~83% of affected rows used GPT-5.5)
- find_model_pricing_row: ASCII-lowercase normalization so
"OpenAI/GPT-5.5@HIGH" matches seeded "gpt-5.5" (see the sketch after
this list)
- Startup cost backfill in async task: scan rows where total_cost <= 0
but tokens > 0, recompute via model_pricing in a single transaction
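A sketch of the pricing match; the commit only states that the comparison
is ASCII-lowercased, so the substring rule below is an assumption:

```rust
// "OpenAI/GPT-5.5@HIGH" -> "openai/gpt-5.5@high", which contains "gpt-5.5".
fn matches_pricing_row(request_model: &str, seeded_id: &str) -> bool {
    request_model
        .to_ascii_lowercase()
        .contains(&seeded_id.to_ascii_lowercase())
}
```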
Performance:
- Add (app_type, created_at DESC) covering index for dashboard range
queries
- Add expression index on COALESCE(data_source, 'proxy') so dedup EXISTS
subqueries use index lookup instead of full scan; drop superseded
idx_request_logs_dedup_lookup
Refactor:
- row_to_request_log_detail helper (3-way de-dup; fixes cost_multiplier
\"1\" vs \"1.0\" drift between callers)
- Promote get_sync_state/update_sync_state to shared session_usage
module (4 copies -> 1)
- run_step helper in lib.rs replaces 9 if-let-Err blocks
- maybe_backfill_log_costs returns bool to skip duplicate total_cost
parsing in caller
Proxy writes and session-log sync wrote to proxy_request_logs with
mismatched request_ids: only Claude on a native Anthropic backend used the
shared `session:{message_id}` key. Codex/Gemini and Claude-through-OpenAI
providers always produced distinct ids, so primary-key dedup never fired
and every real request was recorded twice.
Adds a 7-dim fingerprint dedup (app_type, 4 token counts, 2xx status,
model with case-insensitive match, ±10min window) wired into three layers:
- Write path: should_skip_session_insert() blocks duplicate session rows
before INSERT, unifying the previously-divergent Claude/Codex/Gemini
paths through a single DedupKey-based helper.
- Read path: effective_usage_log_filter() excludes already-covered session
rows from every aggregation query.
- Rollup path: same filter applied so usage_daily_rollups never absorbs
duplicates.
Also adds a covering index (idx_request_logs_dedup_lookup) so the EXISTS
subquery stays index-only, and a transform.rs regression test that pins
openai_to_anthropic id preservation - the missing piece that lets
Claude+OpenAI-compatible providers reuse the session: id scheme.
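An illustrative shape for the fingerprint; the real write-path check and
read-path filter run against the database (the EXISTS subquery), and which
four token counts make up the key is an assumption here:

```rust
#[derive(PartialEq, Eq)]
struct DedupKey {
    app_type: String,
    input_tokens: i64,
    output_tokens: i64,
    cache_creation_tokens: i64,
    cache_read_tokens: i64,
    success: bool,       // any 2xx status collapses to true
    model_lower: String, // model compared case-insensitively
}

// Two rows describe the same request when every dimension matches and the
// timestamps fall within the ±10 minute window.
fn is_duplicate(a: &DedupKey, b: &DedupKey, delta_secs: i64) -> bool {
    a == b && delta_secs.abs() <= 10 * 60
}
```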
Sync the preset's websiteUrl from the legacy /coding/docs/ path to
the current /code/docs/ path across all four app presets (claude,
hermes, openclaw, opencode).
The query_siliconflow function received an is_cn flag that only switched
the request domain (.cn vs .com) but the response builder hardcoded
unit="CNY" for both sites. International users at api.siliconflow.com
saw their USD balance labelled as CNY. Now unit and plan_name follow
is_cn, so the EN site shows USD and "SiliconFlow (EN)".
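A hedged sketch of the fix, with the CN-site plan label assumed:

```rust
// Domain, currency unit, and plan label are all keyed off the same flag.
fn siliconflow_site(is_cn: bool) -> (&'static str, &'static str, &'static str) {
    if is_cn {
        ("https://api.siliconflow.cn", "CNY", "SiliconFlow")
    } else {
        ("https://api.siliconflow.com", "USD", "SiliconFlow (EN)")
    }
}
```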
Codex models no longer accept model_context_window=1000000, so the
toggle and its paired auto-compact-limit input are commented out in
the provider edit form. State hooks, helper imports, and i18n keys
are preserved so the UI can be restored in one batch if upstream
support returns. The TOML editor remains visible, allowing manual
edits if needed.
- Deduplicate repeated upstream `finish_reason` chunks so only one Anthropic `message_delta` is emitted.
- Preserve late `choices: []` usage-only chunks before sending the final `message_delta`.
- Keep stream error paths from emitting successful terminal events.
- Add regression tests for duplicate finish reasons, usage-only chunks, missing `[DONE]`, and truncated streams.
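A hypothetical guard for the duplicate finish_reason case; the stop_reason
mapping and event shape here are illustrative, not the real streaming state
machine:

```rust
#[derive(Default)]
struct StreamState {
    finish_reason_sent: bool,
}

impl StreamState {
    // Returns the Anthropic message_delta payload exactly once per stream,
    // even if upstream repeats the finish_reason chunk.
    fn stop_reason_event(&mut self, finish_reason: &str) -> Option<String> {
        if self.finish_reason_sent {
            return None; // duplicate upstream chunk: drop it
        }
        self.finish_reason_sent = true;
        let stop_reason = match finish_reason {
            "length" => "max_tokens",
            "tool_calls" => "tool_use",
            _ => "end_turn",
        };
        Some(format!(
            r#"{{"type":"message_delta","delta":{{"stop_reason":"{stop_reason}"}}}}"#
        ))
    }
}
```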
* feat(provider-form): soften business-rule validation with "save anyway" prompt
Refactor handleSubmit so empty-field / missing-item validations (provider
name, endpoint, API key, opencode model, template variables, provider key
required) no longer hard-reject with toast.error. Instead they are collected
into an issues list and presented via a ConfirmDialog; the user can cancel
or choose "Save anyway" to proceed.
Integrity constraints stay as hard rejections:
- providerKey regex / duplicate (would corrupt other providers)
- Copilot / Codex OAuth not authenticated (no token, cannot establish)
- omo Other Fields JSON not an object / parse failure
This aligns the frontend with the backend's existing "relaxed save / strict
switch" split (see gemini_config.rs: validate_gemini_settings vs
validate_gemini_settings_strict) and unblocks legitimate configs such as
AWS Bedrock, Vertex AI, and custom Gemini base URLs that the UI previously
refused to save.
Refs: #2196, #1204
* fix(provider-form): address review feedback on soft-validation
P1: move empty providerKey back to hard rejection for OpenCode / OpenClaw /
Hermes. Since providerKey is the primary identity for these apps and the
mutations layer throws "Provider key is required" when absent, letting users
click "save anyway" would surface a generic error toast instead of a
precise, actionable one. Treat empty providerKey as an integrity constraint
alongside regex / duplicate checks.
P2: give the soft-confirm submit path its own submitting state. The
confirm-dialog path bypassed react-hook-form's isSubmitting lifecycle, so
slow or failing saves left the outer submit button responsive and could
spawn unhandled rejections. Now the confirm handler awaits performSubmit
inside try/catch/finally, uses an isConfirmSubmitting flag to gate both
confirm and cancel clicks, and folds the flag into the outer disabled
state and onSubmittingChange callback.
Refs: #2307 review comments
* chore(clippy): use push for single char '…' in truncate_body
Clippy 1.95 added single_char_add_str which flagged the push_str("…")
in truncate_body. Rebased onto latest upstream/main and applied the
suggested fix so the Backend Checks clippy job passes.
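The fix itself is a one-liner; the helper below is an illustrative stand-in
for the touched spot in truncate_body:

```rust
// clippy 1.95 single_char_add_str: prefer push for a single character.
fn append_ellipsis(body: &mut String) {
    // was: body.push_str("…");
    body.push('…');
}
```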
Unrelated to this PR's core changes; bundled in so the PR is mergeable
without waiting for a separate upstream fix.
---------
Co-authored-by: Allen <allen@AllenMacBook-M4-Pro.local>
Introduce a dedicated "Compshare Coding Plan" variant pointing to
https://cp.compshare.cn (with /v1 for OpenAI-compatible apps). Reuses
the existing ucloud icon and promotion copy, while adding a new
providerForm.presets.ucloudCoding key in zh/en/ja.
The $2 sign-up credit is no longer granted automatically — users now
need to contact Crazyrouter's customer support after registering to
claim it. Sync all three README variants and drop the outdated
"instantly / 即時進呈" wording. Also fix a stray double space in the
English sentence.
Update the Compshare entry to describe the new offering: per-use
domestic-model Coding Plan packages with officially-relayed overseas
models, replacing the outdated "60-80% off" discount wording. Keep all
three README variants in sync.
The original lioncc.png had a transparent background, which made the
black "LionCC" wordmark blend into GitHub's dark-mode page and become
hard to read. Composite the artwork onto solid white so the logo stays
legible in every theme without changing any markup.
DeepSeek released V4 flash/pro; legacy IDs deepseek-chat / deepseek-reasoner
now alias to deepseek-v4-flash and will be deprecated.
- Update claude/hermes/opencode/openclaw presets to v4-pro / v4-flash,
context 128K -> 1M; Claude Anthropic-compat endpoint routes OPUS/SONNET
to v4-pro and HAIKU to v4-flash, plus an explicit modelsUrl override.
- Seed deepseek-v4-flash ($0.14/$0.28 per 1M) and deepseek-v4-pro
($1.68/$3.36 per 1M) into model_pricing; older v3.x / chat / reasoner
rows kept for historical usage stats (INSERT OR IGNORE).
- Refresh user-manual (zh/en/ja) pricing table and note that legacy model
IDs are billed at v4-flash rates.
Providers like DeepSeek, Kimi, Zhipu GLM and MiniMax expose the
Anthropic-compatible API on a subpath (e.g. /anthropic) while the
OpenAI-style /models endpoint lives at the API root. The previous
heuristic blindly appended /v1/models to the Base URL, so every such
provider returned 404 and the UI mislabeled it as "provider does not
support fetching models".
Backend now generates a candidate list and tries them in order:
preset override -> baseURL /v1/models -> stripped-subpath /v1/models ->
stripped-subpath /models. Non-404/405 responses (auth, network) stop
immediately so we never retry against hostile status codes. Known
compat suffixes are kept in a length-descending constant so the
longest match wins; response bodies are truncated to 512 chars to
avoid HTML 404 pages bloating the error string.
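A sketch of the candidate ordering; the suffix constant and URL handling
are simplified stand-ins for the real helpers:

```rust
// Length-descending so the longest compat suffix wins (contents illustrative).
const COMPAT_SUFFIXES: &[&str] = &["/anthropic/v1", "/anthropic", "/v1"];

fn model_endpoint_candidates(preset_override: Option<&str>, base_url: &str) -> Vec<String> {
    let base = base_url.trim_end_matches('/');
    let stripped = COMPAT_SUFFIXES
        .iter()
        .find_map(|s| base.strip_suffix(*s))
        .unwrap_or(base);

    let mut candidates = Vec::new();
    if let Some(url) = preset_override {
        candidates.push(url.to_string()); // preset modelsUrl is tried first
    }
    candidates.push(format!("{base}/v1/models"));
    candidates.push(format!("{stripped}/v1/models"));
    candidates.push(format!("{stripped}/models"));
    candidates.dedup(); // adjacent duplicates collapse when nothing was stripped
    candidates
}
```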
Preset type gains an optional modelsUrl (DeepSeek points at
https://api.deepseek.com/models). Frontend threads the override
through fetchModelsForConfig when the current Base URL still matches
the preset default. A new fetchModelsEndpointNotFound i18n key
replaces the misleading "not supported" toast for exhausted-candidate
and 404/405 cases (zh/en/ja).
Copilot upstream returns model_not_supported when the client sends
dash-form Claude IDs (claude-sonnet-4-6, claude-sonnet-4-6[1m]) while
/models only accepts dot form (claude-sonnet-4.6, -1m suffix).
- Add copilot_model_map: syntax normalization (dash->dot, [1m]->-1m; see
the sketch below) plus live /models exact match and family-version
fallback, reusing the
existing 5 min auth cache. Returns None when the whole family is
absent so upstream surfaces an explicit error instead of silently
switching families.
- Wire into forwarder Copilot hook; runs before anthropic_to_openai
conversion.
- Default Opus slot in the Copilot preset maps to Sonnet 4.6: Pro
dropped all Opus on 2026-04-20 and Pro+ bills Opus 4.7 at 7.5x.
Users who want real Opus can switch manually in the UI.
Refs: https://github.com/farion1231/cc-switch/issues/2016
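A sketch of the syntax-normalization half of copilot_model_map; the example
IDs come from the text above, the parsing logic is illustrative:

```rust
fn normalize_copilot_model_id(id: &str) -> String {
    // The "[1m]" long-context marker becomes a "-1m" suffix first:
    // claude-sonnet-4-6[1m] -> claude-sonnet-4-6-1m
    let id = id.replace("[1m]", "-1m");
    let parts: Vec<&str> = id.split('-').collect();
    let numeric = |s: &str| !s.is_empty() && s.chars().all(|c| c.is_ascii_digit());
    // Rejoin the first pair of adjacent numeric segments with a dot:
    // claude-sonnet-4-6 -> claude-sonnet-4.6
    for i in 0..parts.len().saturating_sub(1) {
        let (a, b) = (parts[i], parts[i + 1]);
        if numeric(a) && numeric(b) {
            let mut out = parts[..i].join("-");
            if !out.is_empty() {
                out.push('-');
            }
            out.push_str(&format!("{a}.{b}"));
            let rest = parts[i + 2..].join("-");
            if !rest.is_empty() {
                out.push('-');
                out.push_str(&rest);
            }
            return out;
        }
    }
    id
}
```

With these rules, claude-sonnet-4-6 becomes claude-sonnet-4.6 and
claude-sonnet-4-6[1m] becomes claude-sonnet-4.6-1m; IDs without a
recognizable version pass through unchanged for the exact-match and
family-fallback stages.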