Fix openrouter model prefix handling #2542
liangzhang-keepmoving wants to merge 2 commits into openai:main
Conversation
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 7246de29cc
ℹ️ About Codex in GitHub
Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".
src/agents/models/multi_provider.py (outdated)

```python
return provider.get_model(model_name)
elif prefix:
    # For unknown prefixes, pass the full model name (including prefix) to LiteLLM
    return self._get_fallback_provider(prefix).get_model(f"{prefix}/{model_name}")
```
Keep litellm/ as routing prefix only
This branch now prepends the prefix for every prefixed model, including litellm/..., so MultiProvider.get_model("litellm/anthropic/claude-...") forwards litellm/anthropic/claude-... into LitellmProvider instead of the previous anthropic/claude-.... Because LitellmProvider.get_model passes that string through unchanged, this regresses the documented litellm/... flow and can break existing non-OpenAI model calls that relied on the old stripping behavior. The prefix re-attachment should apply only to unknown prefixes (like openrouter), not the built-in litellm route.
@codex review I've addressed the feedback you provided:
The fix now properly handles both openrouter models (by passing the full model name, including the prefix, to LiteLLM) and maintains the existing behavior for litellm models (by stripping the prefix before passing to LitellmProvider).
💡 Codex Review
Reviewed commit: 16cedc1384
```python
if prefix == "litellm":
    return self._get_fallback_provider(prefix).get_model(model_name)
else:
    return self._get_fallback_provider(prefix).get_model(f"{prefix}/{model_name}")
```
Keep OpenAI-prefixed models stripped before provider lookup
The new fallback path reattaches every non-litellm prefix before calling the fallback provider, so MultiProvider().get_model("openai/gpt-4o") now passes "openai/gpt-4o" into OpenAIProvider instead of the previously stripped "gpt-4o". This regresses the documented openai/... behavior and can produce invalid model names at runtime for users who explicitly prefix OpenAI models.
@codex review Thank you for the feedback! I've already addressed this issue in the latest commit (16cedc1). The fix adds a special case in the `elif prefix:` branch:

```python
# For unknown prefixes, pass the full model name (including prefix) to LiteLLM
# Except for the litellm prefix, where we should pass just the model name (without prefix)
if prefix == "litellm":
    return self._get_fallback_provider(prefix).get_model(model_name)
else:
    return self._get_fallback_provider(prefix).get_model(f"{prefix}/{model_name}")
```

This ensures that the fix now correctly handles both use cases while preserving the documented litellm/... flow.
💡 Codex Review
Reviewed commit: 16cedc1384
```python
if prefix == "litellm":
    return self._get_fallback_provider(prefix).get_model(model_name)
else:
    return self._get_fallback_provider(prefix).get_model(f"{prefix}/{model_name}")
```
Keep openai/ models stripped before OpenAIProvider
The new elif prefix fallback path re-adds the prefix for every non-litellm value, so default MultiProvider calls like get_model("openai/gpt-4.1") now become OpenAIProvider.get_model("openai/gpt-4.1") instead of "gpt-4.1". In the common case where provider_map is unset (including responses_websocket_session), this breaks the documented openai/... routing and can send invalid model IDs to OpenAI.
@codex review Thank you for the feedback! I've addressed this issue in the latest commit (0066b352). The fix now handles the "openai" prefix the same way as the "litellm" prefix:

```python
elif prefix:
    # For known prefixes (openai, litellm), pass just the model name (without prefix)
    # For unknown prefixes (like openrouter), pass the full model name (including prefix) to LiteLLM
    if prefix == "litellm" or prefix == "openai":
        return self._get_fallback_provider(prefix).get_model(model_name)
    else:
        return self._get_fallback_provider(prefix).get_model(f"{prefix}/{model_name}")
```

This ensures that the fix now correctly handles all three cases while preserving the documented behavior for both openai/... and litellm/... flows.
💡 Codex Review
Reviewed commit: 16cedc1384
```python
if prefix == "litellm":
    return self._get_fallback_provider(prefix).get_model(model_name)
else:
    return self._get_fallback_provider(prefix).get_model(f"{prefix}/{model_name}")
```
Preserve openai prefix stripping in fallback routing
When model_name is passed as openai/... and there is no explicit provider_map entry for openai (the default path), this branch re-adds the prefix and calls OpenAIProvider.get_model("openai/<model>") instead of <model>. Because _get_fallback_provider("openai") resolves to OpenAIProvider, this regresses the documented openai/<model> behavior and can send invalid model IDs to OpenAI in flows that rely on explicit openai/ prefixes.
@openai/openai-agents-python-maintainers Could you please review this PR when you have a chance? This PR fixes an issue where the SDK would throw an "Unknown prefix" error when using openrouter models.
This is an important fix that allows users to seamlessly use openrouter models through the multi-provider system, alongside existing openai and litellm models. The fix has already addressed all feedback from the Codex reviews.
Thank you for your time!
This seems to be a duplicate of #2541.
This fix addresses the issue with openrouter model prefix handling by:
- Updating `_create_fallback_provider` to return LitellmProvider for all prefixes
- Updating `get_model` to pass the full model name (including prefix) to LiteLLM for unknown prefixes

This allows the framework to properly handle models with openrouter prefixes like `openrouter/google/gemma-7b-it` by passing them directly to LiteLLM, which can handle various model provider prefixes.
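To make the pass-through concrete, here is a minimal sketch with a hypothetical stand-in class (`FakeLitellmProvider` is invented for illustration, not the SDK's LitellmProvider). It shows the PR's intent: a litellm/ routing prefix is stripped before the fallback provider sees the name, while an unknown prefix such as openrouter/ is forwarded intact so LiteLLM can interpret it:

```python
class FakeLitellmProvider:
    """Hypothetical stand-in that records the model strings it receives."""

    def __init__(self) -> None:
        self.received: list[str] = []

    def get_model(self, model_name: str) -> str:
        self.received.append(model_name)
        return model_name


def get_model(model_name: str, fallback: FakeLitellmProvider) -> str:
    # Sketch of the PR's rule: litellm/ is a routing prefix only and gets
    # stripped; unknown prefixes (openrouter, ...) stay in the name.
    prefix, sep, rest = model_name.partition("/")
    if sep and prefix == "litellm":
        return fallback.get_model(rest)
    return fallback.get_model(model_name)


fallback = FakeLitellmProvider()
get_model("litellm/anthropic/claude-3-5-sonnet", fallback)
get_model("openrouter/google/gemma-7b-it", fallback)
print(fallback.received)
```

Recording what the fallback actually receives is also how a regression test for this PR could be structured: assert that the litellm-prefixed call arrives stripped and the openrouter-prefixed call arrives whole.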