fix(config): refine model type inference and tooltip focus behavior #1233
Conversation
Codex usage limits have been reached for code reviews. Please check with the admins of this repo to increase the limits by adding credits.
📝 Walkthrough

This pull request expands the provider ecosystem by introducing a new "nano-gpt" provider with diverse model configurations, adjusts model type inference logic to prioritize image output detection, and enhances tooltip accessibility with a keyboard-focus property.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/main/presenter/configPresenter/modelConfig.ts (1)
59-74: Fix imageGeneration type mapping to use `ModelType.ImageGeneration` instead of `ModelType.Chat`.

Line 69 maps `type: 'imageGeneration'` to `ModelType.Chat`, but `ModelType.ImageGeneration` exists and is the correct return value. This causes all models with `type: 'imageGeneration'` (100+ in providers.json) to be misclassified as chat models. This breaks image generation functionality across multiple providers (Vertex, Gemini, OpenAI, Zhipu, Grok, Minimax, Doubao) that depend on detecting `ModelType.ImageGeneration` to trigger proper image generation handlers.

Change line 69 from `return ModelType.Chat` to `return ModelType.ImageGeneration`.
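For reference, a minimal sketch of the corrected mapping. The enum members and model shape below are local stand-ins inferred from this review, not the actual source:

```ts
// Local stand-ins: the real ModelType enum and ProviderModel type live in the
// codebase (see src/shared/types/model-db.ts); names here follow this review.
enum ModelType {
  Chat = 'chat',
  Embedding = 'embedding',
  Rerank = 'rerank',
  ImageGeneration = 'imageGeneration'
}

interface ProviderModelLike {
  type?: 'chat' | 'embedding' | 'rerank' | 'imageGeneration'
  modalities?: { input?: string[]; output?: string[] }
}

function inferModelType(model: ProviderModelLike): ModelType {
  // Priority 1: an image output modality is treated as authoritative
  if (Array.isArray(model.modalities?.output) && model.modalities.output.includes('image')) {
    return ModelType.ImageGeneration
  }
  // Priority 2: the explicit type declared in providers.json
  switch (model.type) {
    case 'embedding':
      return ModelType.Embedding
    case 'rerank':
      return ModelType.Rerank
    case 'imageGeneration':
      return ModelType.ImageGeneration // previously returned ModelType.Chat (the bug flagged above)
    default:
      return ModelType.Chat
  }
}
```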
🧹 Nitpick comments (1)
src/main/presenter/configPresenter/modelConfig.ts (1)
54-57: Clarify the rationale for prioritizing modality over explicit type.

Moving the image output modality check to the highest priority changes the inference behavior. This means `modalities.output` is now considered more authoritative than the explicit `model.type` from providers.json. While this may be intentional (especially for addressing OpenRouter mislabels, as mentioned in the PR), it could have unintended effects on multimodal chat models that happen to include image in their output modalities.

Consider adding a comment explaining why modality takes precedence, particularly in the context of OpenRouter or other providers that may mislabel models.
🔎 Suggested documentation improvement
```diff
-  // Priority 1: Output modality indicates image generation
+  // Priority 1: Output modality indicates image generation
+  // Note: Prioritized over model.type to handle providers (e.g., OpenRouter) that
+  // may mislabel chat models as 'imageGeneration'. Modality is more reliable.
   if (Array.isArray(model.modalities?.output) && model.modalities.output.includes('image')) {
     return ModelType.ImageGeneration
   }
```
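If this precedence is intentional, a small regression test could pin it down. A sketch, assuming Vitest and that the helper is exported; the import path and names are hypothetical:

```ts
import { describe, expect, it } from 'vitest'
// Hypothetical import: the real helper lives in
// src/main/presenter/configPresenter/modelConfig.ts, possibly under another name.
import { inferModelType, ModelType } from './modelConfig'

describe('model type inference priority', () => {
  it('treats an image output modality as authoritative over an explicit chat type', () => {
    expect(inferModelType({ type: 'chat', modalities: { output: ['text', 'image'] } })).toBe(
      ModelType.ImageGeneration
    )
  })

  it('classifies an explicit imageGeneration type correctly even without modalities', () => {
    expect(inferModelType({ type: 'imageGeneration' })).toBe(ModelType.ImageGeneration)
  })
})
```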
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- resources/model-db/providers.json
- src/main/presenter/configPresenter/modelConfig.ts
- src/renderer/src/components/ChatConfig/ConfigFieldHeader.vue
🧰 Additional context used
📓 Path-based instructions (25)
**/*.{ts,tsx,js,jsx,vue}
📄 CodeRabbit inference engine (CLAUDE.md)
Use English for logs and comments (Chinese text exists in legacy code, but new code should use English)
Files:
- src/main/presenter/configPresenter/modelConfig.ts
- src/renderer/src/components/ChatConfig/ConfigFieldHeader.vue
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Enable and maintain strict TypeScript type checking for all files
**/*.{ts,tsx}: Always use try-catch to handle possible errors in TypeScript code
Provide meaningful error messages when catching errors
Log detailed error logs including error details, context, and stack traces
Distinguish and handle different error types (UserError, NetworkError, SystemError, BusinessError) with appropriate handlers in TypeScript
Use structured logging with logger.error(), logger.warn(), logger.info(), logger.debug() methods from logging utilities
Do not suppress errors (avoid empty catch blocks or silently ignoring errors)
Provide user-friendly error messages for user-facing errors in TypeScript components
Implement error retry mechanisms for transient failures in TypeScript
Avoid logging sensitive information (passwords, tokens, PII) in logs
Files:
src/main/presenter/configPresenter/modelConfig.ts
src/main/presenter/**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Organize core business logic into dedicated Presenter classes, with one presenter per functional domain
Files:
src/main/presenter/configPresenter/modelConfig.ts
src/main/**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Use EventBus from `src/main/eventbus.ts` for main-to-renderer communication, broadcasting events via `mainWindow.webContents.send()`
src/main/**/*.ts: Use EventBus pattern for inter-process communication within the main process to decouple modules
Use Electron's built-in APIs for file system and native dialogs instead of Node.js or custom implementations
src/main/**/*.ts: Electron main process code belongs in `src/main/` with presenters in `presenter/` (Window/Tab/Thread/Mcp/Config/LLMProvider) and `eventbus.ts` for app events
Use the Presenter pattern in the main process for UI coordination
Files:
src/main/presenter/configPresenter/modelConfig.ts
src/main/presenter/configPresenter/**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Store and retrieve custom prompts via `configPresenter.getCustomPrompts()` for config-based data source management
Files:
src/main/presenter/configPresenter/modelConfig.ts
**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Do not include AI co-authoring information (e.g., 'Co-Authored-By: Claude') in git commits
Files:
src/main/presenter/configPresenter/modelConfig.ts
**/*.{js,ts,jsx,tsx,mjs,cjs}
📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)
Write logs and comments in English
Files:
src/main/presenter/configPresenter/modelConfig.ts
{src/main/presenter/**/*.ts,src/renderer/**/*.ts}
📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)
Implement proper inter-process communication (IPC) patterns using Electron's ipcRenderer and ipcMain APIs
Files:
src/main/presenter/configPresenter/modelConfig.ts
src/**/*
📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)
New features should be developed in the `src` directory
Files:
- src/main/presenter/configPresenter/modelConfig.ts
- src/renderer/src/components/ChatConfig/ConfigFieldHeader.vue
src/main/**/*.{js,ts}
📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)
Main process code for Electron should be placed in `src/main`
Files:
src/main/presenter/configPresenter/modelConfig.ts
src/**/*.{ts,tsx,vue,js,jsx}
📄 CodeRabbit inference engine (AGENTS.md)
Use Prettier with single quotes, no semicolons, and 100 character width
Files:
- src/main/presenter/configPresenter/modelConfig.ts
- src/renderer/src/components/ChatConfig/ConfigFieldHeader.vue
src/**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (AGENTS.md)
Use OxLint for linting JavaScript and TypeScript files
Files:
src/main/presenter/configPresenter/modelConfig.ts
src/**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
src/**/*.{ts,tsx}: Use camelCase for variable and function names in TypeScript files
Use PascalCase for type and class names in TypeScript
Use SCREAMING_SNAKE_CASE for constant names
Files:
src/main/presenter/configPresenter/modelConfig.ts
src/**/*.ts
📄 CodeRabbit inference engine (AGENTS.md)
Use EventBus for inter-process communication events
Files:
src/main/presenter/configPresenter/modelConfig.ts
**/*.vue
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.vue: Use Vue 3 Composition API for all components instead of Options API
Use Tailwind CSS with scoped styles for component styling
Files:
src/renderer/src/components/ChatConfig/ConfigFieldHeader.vue
src/renderer/**/*.vue
📄 CodeRabbit inference engine (CLAUDE.md)
src/renderer/**/*.vue: All user-facing strings must use i18n keys via vue-i18n for internationalization
Ensure proper error handling and loading states in all UI components
Implement responsive design using Tailwind CSS utilities for all UI components
src/renderer/**/*.vue: Use composition API and declarative programming patterns; avoid options API
Structure files: exported component, composables, helpers, static content, types
Use PascalCase for component names (e.g., AuthWizard.vue)
Use Vue 3 with TypeScript, leveraging defineComponent and PropType
Use template syntax for declarative rendering
Use Shadcn Vue, Radix Vue, and Tailwind for components and styling
Implement responsive design with Tailwind CSS; use a mobile-first approach
Use Suspense for asynchronous components
Use <script setup> syntax for concise component definitions
Prefer 'lucide:' icon family as the primary choice for Iconify icons
Import Icon component from '@iconify/vue' and use with lucide icons following pattern '{collection}:{icon-name}'
Files:
src/renderer/src/components/ChatConfig/ConfigFieldHeader.vue
src/renderer/src/**/*.{vue,ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/i18n.mdc)
src/renderer/src/**/*.{vue,ts,tsx}: All user-facing strings must use i18n keys with vue-i18n framework in the renderer
Import and use useI18n() composable with the t() function to access translations in Vue components and TypeScript files
Use the dynamic locale.value property to switch languages at runtime
Avoid hardcoding user-facing text and ensure all user-visible text uses the i18n translation system
Files:
src/renderer/src/components/ChatConfig/ConfigFieldHeader.vue
src/renderer/**/*.{vue,js,ts}
📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)
Renderer process code should be placed in `src/renderer` (Vue 3 application)
Files:
src/renderer/src/components/ChatConfig/ConfigFieldHeader.vue
src/renderer/src/**/*.{vue,ts,tsx,js,jsx}
📄 CodeRabbit inference engine (.cursor/rules/vue-best-practices.mdc)
src/renderer/src/**/*.{vue,ts,tsx,js,jsx}: Use the Composition API for better code organization and reusability in Vue.js applications
Implement proper state management with Pinia in Vue.js applications
Utilize Vue Router for navigation and route management in Vue.js applications
Leverage Vue's built-in reactivity system for efficient data handling
Files:
src/renderer/src/components/ChatConfig/ConfigFieldHeader.vue
src/renderer/src/**/*.vue
📄 CodeRabbit inference engine (.cursor/rules/vue-best-practices.mdc)
Use scoped styles to prevent CSS conflicts between Vue components
Files:
src/renderer/src/components/ChatConfig/ConfigFieldHeader.vue
src/renderer/**/*.{ts,tsx,vue}
📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)
src/renderer/**/*.{ts,tsx,vue}: Write concise, technical TypeScript code with accurate examples
Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError)
Avoid enums; use const objects instead
Use arrow functions for methods and computed properties
Avoid unnecessary curly braces in conditionals; use concise syntax for simple statements
Vue 3 app code in `src/renderer/src` should be organized into `components/`, `stores/`, `views/`, `i18n/`, `lib/` directories with shell UI in `src/renderer/shell/`
Files:
src/renderer/src/components/ChatConfig/ConfigFieldHeader.vue
src/renderer/**
📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)
Use lowercase with dashes for directories (e.g., components/auth-wizard)
Files:
src/renderer/src/components/ChatConfig/ConfigFieldHeader.vue
src/renderer/**/*.{ts,vue}
📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)
src/renderer/**/*.{ts,vue}: Use useFetch and useAsyncData for data fetching
Leverage ref, reactive, and computed for reactive state management
Use provide/inject for dependency injection when appropriate
Use Iconify/Vue for icon implementation
Files:
src/renderer/src/components/ChatConfig/ConfigFieldHeader.vue
src/renderer/src/**/*.{ts,tsx,vue}
📄 CodeRabbit inference engine (AGENTS.md)
src/renderer/src/**/*.{ts,tsx,vue}: Use TypeScript with Vue 3 Composition API for the renderer application
All user-facing strings must use vue-i18n keys in `src/renderer/src/i18n`
Files:
src/renderer/src/components/ChatConfig/ConfigFieldHeader.vue
src/renderer/src/components/**/*.vue
📄 CodeRabbit inference engine (AGENTS.md)
src/renderer/src/components/**/*.vue: Use Tailwind for styles in Vue components
Vue component files must use PascalCase naming (e.g., `ChatInput.vue`)
Files:
src/renderer/src/components/ChatConfig/ConfigFieldHeader.vue
🧠 Learnings (9)
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Implement separation of concerns where `src/main/presenter/llmProviderPresenter/index.ts` manages the Agent loop and conversation history, while Provider files handle LLM API interactions, Provider-specific request/response formatting, tool definition conversion, and native vs non-native tool call mechanisms
Applied to files:
src/main/presenter/configPresenter/modelConfig.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/**/*.ts : Define the standardized `LLMCoreStreamEvent` interface with fields: `type` (text | reasoning | tool_call_start | tool_call_chunk | tool_call_end | error | usage | stop | image_data), `content` (for text), `reasoning_content` (for reasoning), `tool_call_id`, `tool_call_name`, `tool_call_arguments_chunk` (for streaming), `tool_call_arguments_complete` (for complete arguments), `error_message`, `usage` object with token counts, `stop_reason` (tool_use | max_tokens | stop_sequence | error | complete), and `image_data` object with Base64-encoded data and mimeType
Applied to files:
src/main/presenter/configPresenter/modelConfig.ts
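Rendered as a type, the learning above describes roughly the following shape. This is a sketch reconstructed from that field list, not the actual source definition; the token-count field names inside `usage` are assumptions:

```ts
// Sketch of LLMCoreStreamEvent per the learning above; the real definition
// lives under src/main/presenter/llmProviderPresenter/ and may differ.
interface LLMCoreStreamEvent {
  type:
    | 'text'
    | 'reasoning'
    | 'tool_call_start'
    | 'tool_call_chunk'
    | 'tool_call_end'
    | 'error'
    | 'usage'
    | 'stop'
    | 'image_data'
  content?: string // for text events
  reasoning_content?: string // for reasoning events
  tool_call_id?: string
  tool_call_name?: string
  tool_call_arguments_chunk?: string // streamed argument fragments
  tool_call_arguments_complete?: string // complete arguments
  error_message?: string
  usage?: { prompt_tokens: number; completion_tokens: number; total_tokens: number } // assumed names
  stop_reason?: 'tool_use' | 'max_tokens' | 'stop_sequence' | 'error' | 'complete'
  image_data?: { data: string; mimeType: string } // Base64-encoded payload
}
```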
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts`, handle `reasoning`, `text`, `image_data`, and `usage` events by processing and forwarding them through `STREAM_EVENTS.RESPONSE` events to the frontend
Applied to files:
src/main/presenter/configPresenter/modelConfig.ts
📚 Learning: 2025-11-25T05:26:11.312Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-25T05:26:11.312Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Convert MCP tools to provider-specific formats and normalize streaming responses to standard events in each provider implementation
Applied to files:
src/main/presenter/configPresenter/modelConfig.ts
📚 Learning: 2025-11-25T05:26:43.510Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-11-25T05:26:43.510Z
Learning: Applies to src/renderer/src/**/*.{vue,ts,tsx} : Avoid hardcoding user-facing text and ensure all user-visible text uses the i18n translation system
Applied to files:
src/renderer/src/components/ChatConfig/ConfigFieldHeader.vue
📚 Learning: 2025-11-25T05:26:11.312Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-25T05:26:11.312Z
Learning: Applies to src/renderer/**/*.vue : All user-facing strings must use i18n keys via vue-i18n for internationalization
Applied to files:
src/renderer/src/components/ChatConfig/ConfigFieldHeader.vue
📚 Learning: 2025-11-25T05:28:04.454Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-11-25T05:28:04.454Z
Learning: Applies to src/renderer/**/*.{ts,vue} : Use Iconify/Vue for icon implementation
Applied to files:
src/renderer/src/components/ChatConfig/ConfigFieldHeader.vue
📚 Learning: 2025-11-25T05:26:43.510Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-11-25T05:26:43.510Z
Learning: Applies to src/renderer/src/**/*.{vue,ts,tsx} : All user-facing strings must use i18n keys with vue-i18n framework in the renderer
Applied to files:
src/renderer/src/components/ChatConfig/ConfigFieldHeader.vue
📚 Learning: 2025-11-25T05:28:20.513Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-11-25T05:28:20.513Z
Learning: Applies to src/renderer/src/**/*.{ts,tsx,vue} : All user-facing strings must use vue-i18n keys in `src/renderer/src/i18n`
Applied to files:
src/renderer/src/components/ChatConfig/ConfigFieldHeader.vue
🧬 Code graph analysis (1)
src/main/presenter/configPresenter/modelConfig.ts (1)
src/shared/types/model-db.ts (1)
`ProviderModel` (60-60)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: build-check (x64)
🔇 Additional comments (4)
resources/model-db/providers.json (3)
46576-47235: New nano-gpt provider looks well-structured.

All 22 models have consistent schema with required fields (id, name, modalities, limits, etc.). The text-to-text modalities are properly defined for the type inference logic.
48733-48791: Model data updates look appropriate.

The updates to model IDs, output limits, release dates, and costs are consistent with the existing schema. The new EXAONE 4.0.1 32B model (lines 48762-48791) is properly structured with all required fields.
50372-50403: New MiniMax-M2.1 model is well-structured.

The model includes all required fields with proper modalities definition.
src/renderer/src/components/ChatConfig/ConfigFieldHeader.vue (1)
27-27: LGTM! Good accessibility improvement.

Adding `ignore-non-keyboard-focus` prevents the tooltip from auto-showing when triggered by non-keyboard interactions (e.g., when a popover opens). This aligns with the PR objective and improves the user experience.
```json
{
  "id": "ernie-irag-edit",
  "name": "ernie-irag-edit",
  "display_name": "ernie-irag-edit",
  "modalities": {
    "input": [
      "text",
      "image"
    ]
  },
  "limit": {
    "context": 8192,
    "output": 8192
  },
  "tool_call": true,
  "reasoning": {
    "supported": false
  },
  "cost": {
    "input": 2,
    "output": 0,
    "cache_read": 0
  },
  "type": "imageGeneration"
},
```
Missing output modalities for image generation model.
The ernie-irag-edit model has type: "imageGeneration" but is missing the modalities.output field. Given the PR objective to fix model type inference, and the AI summary indicating that the inference logic now checks modalities.output for 'image', this model should explicitly define its output modalities.
🔎 Proposed fix

```diff
   "modalities": {
     "input": [
       "text",
       "image"
-    ]
+    ],
+    "output": [
+      "image"
+    ]
   },
```

🤖 Prompt for AI Agents
In resources/model-db/providers.json around lines 74639 to 74663, the
ernie-irag-edit entry is marked as type "imageGeneration" but lacks a
modalities.output array; add an output modality array including "image" (and
"text" if the model also returns captions/metadata) so the type-inference logic
that checks modalities.output for 'image' will correctly detect it as an
image-generating model.
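As a one-off sanity check, a short script can flag every `imageGeneration` entry that lacks an image output modality. A sketch follows; the top-level shape of providers.json (a map of providers, each holding a `models` array) is an assumption here:

```ts
import { readFileSync } from 'node:fs'

interface ModelEntry {
  id: string
  type?: string
  modalities?: { input?: string[]; output?: string[] }
}

// Assumed shape: { [providerId]: { models: ModelEntry[] } }; adjust to the real file.
const db: Record<string, { models?: ModelEntry[] }> = JSON.parse(
  readFileSync('resources/model-db/providers.json', 'utf8')
)

for (const [providerId, provider] of Object.entries(db)) {
  for (const model of provider.models ?? []) {
    const outputs = model.modalities?.output ?? []
    if (model.type === 'imageGeneration' && !outputs.includes('image')) {
      console.warn(`${providerId}/${model.id}: imageGeneration without an image output modality`)
    }
  }
}
```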
```json
{
  "id": "sophnet-glm-4.7",
  "name": "sophnet-glm-4.7",
  "display_name": "sophnet-glm-4.7",
  "limit": {
    "context": 8192,
    "output": 8192
  },
  "tool_call": false,
  "reasoning": {
    "supported": false
  },
  "cost": {
    "input": 0.273974,
    "output": 1.095896,
    "cache_read": 0.273974
  },
  "type": "chat"
},
```
Missing modalities field.
The sophnet-glm-4.7 model is missing the modalities field that other models consistently define. This could affect type inference behavior.
🔎 Proposed fix

```diff
   "id": "sophnet-glm-4.7",
   "name": "sophnet-glm-4.7",
   "display_name": "sophnet-glm-4.7",
+  "modalities": {
+    "input": [
+      "text"
+    ],
+    "output": [
+      "text"
+    ]
+  },
   "limit": {
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```json
{
  "id": "sophnet-glm-4.7",
  "name": "sophnet-glm-4.7",
  "display_name": "sophnet-glm-4.7",
  "modalities": {
    "input": [
      "text"
    ],
    "output": [
      "text"
    ]
  },
  "limit": {
    "context": 8192,
    "output": 8192
  },
  "tool_call": false,
  "reasoning": {
    "supported": false
  },
  "cost": {
    "input": 0.273974,
    "output": 1.095896,
    "cache_read": 0.273974
  },
  "type": "chat"
},
```
🤖 Prompt for AI Agents
In resources/model-db/providers.json around lines 79996 to 80014, the model
entry for "sophnet-glm-4.7" is missing the modalities field; add a modalities
array consistent with other chat models (for example "modalities": ["text"])
placed at the same object level as "type" and "tool_call" so type inference and
validation work correctly.
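Since the same omission recurs for the Azure DeepSeek entries below, a one-time sweep for models missing `modalities` entirely may also be worth running (same assumed file shape as the sketch earlier in this review):

```ts
import { readFileSync } from 'node:fs'

// Same assumed providers.json shape as the earlier sketch:
// { [providerId]: { models: Array<{ id: string; modalities?: unknown }> } }
const db: Record<string, { models?: Array<{ id: string; modalities?: unknown }> }> = JSON.parse(
  readFileSync('resources/model-db/providers.json', 'utf8')
)

for (const [providerId, provider] of Object.entries(db)) {
  for (const model of provider.models ?? []) {
    if (model.modalities === undefined) {
      console.warn(`${providerId}/${model.id}: missing modalities field`)
    }
  }
}
```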
```json
{
  "id": "azure-deepseek-v3.2",
  "name": "azure-deepseek-v3.2",
  "display_name": "azure-deepseek-v3.2",
  "limit": {
    "context": 8192,
    "output": 8192
  },
  "tool_call": false,
  "reasoning": {
    "supported": false
  },
  "cost": {
    "input": 0.58,
    "output": 1.680028
  },
  "type": "chat"
},
{
  "id": "azure-deepseek-v3.2-speciale",
  "name": "azure-deepseek-v3.2-speciale",
  "display_name": "azure-deepseek-v3.2-speciale",
  "limit": {
    "context": 8192,
    "output": 8192
  },
  "tool_call": false,
  "reasoning": {
    "supported": false
  },
  "cost": {
    "input": 0.58,
    "output": 1.680028
  },
  "type": "chat"
},
```
Missing modalities field on both Azure DeepSeek models.
Both azure-deepseek-v3.2 and azure-deepseek-v3.2-speciale models are missing the modalities field. For consistency with other models and to ensure proper type inference, consider adding the modalities definition.
🔎 Proposed fix for azure-deepseek-v3.2 (apply similarly to the other)

```diff
   "id": "azure-deepseek-v3.2",
   "name": "azure-deepseek-v3.2",
   "display_name": "azure-deepseek-v3.2",
+  "modalities": {
+    "input": [
+      "text"
+    ],
+    "output": [
+      "text"
+    ]
+  },
   "limit": {
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```json
{
  "id": "azure-deepseek-v3.2",
  "name": "azure-deepseek-v3.2",
  "display_name": "azure-deepseek-v3.2",
  "modalities": {
    "input": [
      "text"
    ],
    "output": [
      "text"
    ]
  },
  "limit": {
    "context": 8192,
    "output": 8192
  },
  "tool_call": false,
  "reasoning": {
    "supported": false
  },
  "cost": {
    "input": 0.58,
    "output": 1.680028
  },
  "type": "chat"
},
{
  "id": "azure-deepseek-v3.2-speciale",
  "name": "azure-deepseek-v3.2-speciale",
  "display_name": "azure-deepseek-v3.2-speciale",
  "limit": {
    "context": 8192,
    "output": 8192
  },
  "tool_call": false,
  "reasoning": {
    "supported": false
  },
  "cost": {
    "input": 0.58,
    "output": 1.680028
  },
  "type": "chat"
},
```
🤖 Prompt for AI Agents
In resources/model-db/providers.json around lines 83262 to 83297, both
azure-deepseek-v3.2 and azure-deepseek-v3.2-speciale are missing the modalities
field; add a modalities property to each entry (e.g., modalities:
["text","image"] to match other DeepSeek/vision-capable models) so type
inference and consistency are preserved, ensuring placement follows the same
ordering as other model objects and updating any schema-validation tests if they
expect this field.
Summary by CodeRabbit

Release Notes

New Features
- Added a new "nano-gpt" provider with a diverse set of model configurations.

Improvements
- Model type inference now prioritizes image output detection.
- Tooltips no longer auto-show on non-keyboard focus interactions.