fix(ai-adapters): surface provider code 1305 in OpenAI stream errors #557
Draft
cooleryu wants to merge 1 commit into GCWing:main from
Conversation
wsp1911 (Collaborator) requested changes on Apr 27, 2026
The content of provider_error_with_code.sse needs to be changed to:
data: {"error":{"code":"1305","message":"provider temporarily overloaded"},"request_id":"req_1305"}
Summary
Preserve provider error metadata from OpenAI-compatible SSE error payloads.
When a provider sends an SSE data payload like:
{"error":{"code":"1305","message":"provider temporarily overloaded"},"request_id":"req_1305"}

BitFun already recognizes it as an API error before deserializing it as a normal chat chunk. This follow-up keeps the provider `code` and `request_id` in the surfaced error message instead of reducing the error to message-only text.

Motivation
Issue #380 shows a provider overload response where the useful debugging fields were:

- `code`
- `message`
- `request_id`

Keeping those fields visible makes model overload, quota, gateway, and provider-side failures easier to diagnose from BitFun's runtime errors and raw SSE logs.
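The shape of that change can be sketched with a small stand-alone helper (the names are hypothetical, not the actual BitFun functions): given the parsed error fields, keep `code` and `request_id` in the surfaced message instead of dropping them.

```rust
/// Hypothetical helper (not from the PR diff): fold an optional provider
/// `code` and `request_id` into the surfaced error message.
fn surface_error(code: Option<&str>, message: &str, request_id: Option<&str>) -> String {
    let mut out = String::new();
    if let Some(c) = code {
        // Keep the provider code visible, e.g. "[code 1305] ...".
        out.push_str(&format!("[code {c}] "));
    }
    out.push_str(message);
    if let Some(rid) = request_id {
        // Append the request id so it survives into runtime error logs.
        out.push_str(&format!(" (request_id: {rid})"));
    }
    out
}

fn main() {
    // The payload from issue #380 would surface as:
    // [code 1305] provider temporarily overloaded (request_id: req_1305)
    println!(
        "{}",
        surface_error(Some("1305"), "provider temporarily overloaded", Some("req_1305"))
    );
}
```

Message-only payloads (no `code`, no `request_id`) fall through unchanged, which matches the risk assessment below.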
Changes

- Surface `error.code` in OpenAI-compatible SSE API error messages when present.
- Surface `request_id`/`requestId` when present.
- Keep current behavior for message-only `error` payloads.
- Cover the `code + message + request_id` shape with a test.
- Add a `bitfun-core` stream fixture test so existing CI covers the end-to-end StreamProcessor error path.

Test Plan
Result: passed.
Attempted focused Rust tests:
Result: not run locally because this machine does not currently have `cargo`/`rustc` installed. The new core integration test is under `src/crates/core/tests`, which is covered by the existing Linux CI job: `cargo test --locked -p bitfun-core`

Risk
Low. This only changes formatting for OpenAI-compatible stream error payloads that already contain an `error` object. Normal chat completion chunks, tool-call chunks, and message-only error payloads keep their current behavior.

Related Issue
Refs #380.