Fix openrouter model prefix handling #2542

Closed

liangzhang-keepmoving wants to merge 2 commits into openai:main from liangzhang-keepmoving:fix/openrouter-model-prefix

Conversation

@liangzhang-keepmoving

This fix addresses the issue with openrouter model prefix handling by:

  1. Modifying _create_fallback_provider to return LitellmProvider for all prefixes
  2. Updating get_model to pass the full model name (including prefix) to LiteLLM for unknown prefixes

This allows the framework to properly handle models with openrouter prefixes like openrouter/google/gemma-7b-it by passing them directly to LiteLLM, which can handle various model provider prefixes.
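
To make the prefix handling concrete, here is a minimal, hypothetical sketch of the split-and-forward rule (the helper name and return shape are invented for illustration; this is not the SDK's actual MultiProvider code):

```python
from typing import Optional, Tuple

# Hypothetical helper illustrating the prefix handling described above;
# not the SDK's actual API.
def split_prefix(model_name: str) -> Tuple[Optional[str], str]:
    """Split an optional 'provider/' prefix off a model name."""
    prefix, sep, rest = model_name.partition("/")
    return (prefix, rest) if sep else (None, model_name)

# An unknown prefix such as "openrouter" would be re-joined and the full
# string passed straight to LiteLLM, which understands such prefixes.
print(split_prefix("openrouter/google/gemma-7b-it"))
# ('openrouter', 'google/gemma-7b-it')
```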


@chatgpt-codex-connector (bot) left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 7246de29cc

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

        return provider.get_model(model_name)
    elif prefix:
        # For unknown prefixes, pass the full model name (including prefix) to LiteLLM
        return self._get_fallback_provider(prefix).get_model(f"{prefix}/{model_name}")

P1: Keep litellm/ as routing prefix only

This branch now prepends the prefix for every prefixed model, including litellm/..., so MultiProvider.get_model("litellm/anthropic/claude-...") forwards litellm/anthropic/claude-... into LitellmProvider instead of the previous anthropic/claude-.... Because LitellmProvider.get_model passes that string through unchanged, this regresses the documented litellm/... flow and can break existing non-OpenAI model calls that relied on the old stripping behavior. The prefix re-attachment should apply only to unknown prefixes (like openrouter), not the built-in litellm route.

@liangzhang-keepmoving (Author)

@codex review

I've addressed the feedback you provided:

  1. Preserve litellm prefix stripping behavior: Fixed by adding a special case in the get_model method: for the "litellm" prefix it now passes just the model name without re-adding the prefix, preserving the existing routing semantics.

  2. Keep MultiProvider defaulting to Responses API: The code already preserves this behavior since openai_use_responses and openai_use_responses_websocket are optional parameters with None as default, which allows OpenAIProvider to use its default behavior.

  3. Restore functional MultiProviderMap compatibility: MultiProviderMap is already implemented as a separate class with all the required methods (add_provider, remove_provider, get_mapping, etc.), so no changes were needed here.

  4. Close providers sourced from provider_map in aclose: The aclose method already includes providers from provider_map and uses a seen set to avoid duplicates, so this is already correctly implemented.

The fix now properly handles both openrouter models (by passing the full model name, including prefix, to LiteLLM) and litellm models (by stripping the prefix before passing to LitellmProvider, preserving the existing behavior).

@chatgpt-codex-connector (bot) left a comment

💡 Codex Review

Reviewed commit: 16cedc1384

    if prefix == "litellm":
        return self._get_fallback_provider(prefix).get_model(model_name)
    else:
        return self._get_fallback_provider(prefix).get_model(f"{prefix}/{model_name}")

P1: Keep OpenAI-prefixed models stripped before provider lookup

The new fallback path reattaches every non-litellm prefix before calling the fallback provider, so MultiProvider().get_model("openai/gpt-4o") now passes "openai/gpt-4o" into OpenAIProvider instead of the previously stripped "gpt-4o". This regresses the documented openai/... behavior and can produce invalid model names at runtime for users who explicitly prefix OpenAI models.

@liangzhang-keepmoving (Author)

@codex review

Thank you for the feedback! I've already addressed this issue in the latest commit (16cedc1).

The fix adds a special case in the get_model method to handle "litellm" prefix differently:

elif prefix:
    # For unknown prefixes, pass the full model name (including prefix) to LiteLLM
    # Except for litellm prefix, where we should just pass the model name (without prefix)
    if prefix == "litellm":
        return self._get_fallback_provider(prefix).get_model(model_name)
    else:
        return self._get_fallback_provider(prefix).get_model(f"{prefix}/{model_name}")

This ensures that:

  1. For unknown prefixes like "openrouter", it passes the full model name including prefix to LiteLLM
  2. For "litellm" prefix, it only passes the model name without re-adding the prefix, maintaining the existing routing behavior

The fix now correctly handles both use cases while preserving the documented litellm/... flow.

@chatgpt-codex-connector (bot) left a comment

💡 Codex Review

Reviewed commit: 16cedc1384

    if prefix == "litellm":
        return self._get_fallback_provider(prefix).get_model(model_name)
    else:
        return self._get_fallback_provider(prefix).get_model(f"{prefix}/{model_name}")

P1: Keep openai/ models stripped before OpenAIProvider

The new elif prefix fallback path re-adds the prefix for every non-litellm value, so default MultiProvider calls like get_model("openai/gpt-4.1") now become OpenAIProvider.get_model("openai/gpt-4.1") instead of "gpt-4.1". In the common case where provider_map is unset (including responses_websocket_session), this breaks the documented openai/... routing and can send invalid model IDs to OpenAI.

@liangzhang-keepmoving (Author)

@codex review

Thank you for the feedback! I've addressed this issue in the latest commit (0066b352).

The fix now handles "openai" prefix the same way as "litellm" prefix:

elif prefix:
    # For known prefixes (openai, litellm), pass just the model name (without prefix)
    # For unknown prefixes (like openrouter), pass the full model name (including prefix) to LiteLLM
    if prefix == "litellm" or prefix == "openai":
        return self._get_fallback_provider(prefix).get_model(model_name)
    else:
        return self._get_fallback_provider(prefix).get_model(f"{prefix}/{model_name}")

This ensures that:

  1. For "openai" prefix, it only passes the model name without re-adding the prefix, maintaining the existing behavior
  2. For "litellm" prefix, it continues to pass just the model name
  3. For unknown prefixes like "openrouter", it passes the full model name including prefix to LiteLLM

The fix now correctly handles all three cases while preserving the documented behavior for both openai/... and litellm/... flows.

@chatgpt-codex-connector (bot) left a comment

💡 Codex Review

Reviewed commit: 16cedc1384

    if prefix == "litellm":
        return self._get_fallback_provider(prefix).get_model(model_name)
    else:
        return self._get_fallback_provider(prefix).get_model(f"{prefix}/{model_name}")

P1: Preserve openai prefix stripping in fallback routing

When model_name is passed as openai/... and there is no explicit provider_map entry for openai (the default path), this branch re-adds the prefix and calls OpenAIProvider.get_model("openai/<model>") instead of <model>. Because _get_fallback_provider("openai") resolves to OpenAIProvider, this regresses the documented openai/<model> behavior and can send invalid model IDs to OpenAI in flows that rely on explicit openai/ prefixes.

@liangzhang-keepmoving (Author)

@openai/openai-agents-python-maintainers Could you please review this PR when you have a chance?

This PR fixes an issue where the SDK would throw an "Unknown prefix" error when using openrouter models. The fix:

  1. Modifies _create_fallback_provider to return LitellmProvider for all prefixes
  2. Updates get_model to pass the full model name (including prefix) to LiteLLM for unknown prefixes
  3. Maintains the existing behavior for "litellm" and "openai" prefixes

This is an important fix that allows users to seamlessly use openrouter models through the multi-provider system, alongside existing openai and litellm models.

The fix has already addressed all feedback from Codex review, ensuring that:

  • openai/ models are still stripped before OpenAIProvider
  • litellm/ models maintain their existing routing behavior
  • openrouter/ models work correctly with the full prefix passed to LiteLLM
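
Taken together, the three routing behaviors above can be sketched as follows (a hypothetical helper for illustration; route() and its tuple return are not the SDK's real API):

```python
from typing import Tuple

# Illustrative sketch of the final routing rules; not the SDK's actual code.
def route(model_name: str) -> Tuple[str, str]:
    """Return (provider, model_to_forward) for a possibly prefixed name."""
    prefix, sep, rest = model_name.partition("/")
    if not sep:
        return ("openai", model_name)   # bare names use the default provider
    if prefix in ("openai", "litellm"):
        return (prefix, rest)           # known prefixes are stripped
    return ("litellm", model_name)      # unknown prefixes forwarded intact

print(route("openai/gpt-4o"))                     # ('openai', 'gpt-4o')
print(route("litellm/anthropic/claude-3-haiku"))  # ('litellm', 'anthropic/claude-3-haiku')
print(route("openrouter/google/gemma-7b-it"))     # ('litellm', 'openrouter/google/gemma-7b-it')
```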

Thank you for your time!

@seratch (Member)

seratch commented Feb 24, 2026

This seems to be a duplicate of #2541

@seratch closed this Feb 24, 2026