
Conversation

yyhhyyyyyy
Collaborator

@yyhhyyyyyy yyhhyyyyyy commented Aug 3, 2025

close #543

  1. Add RateLimitManager for independent provider queue management
  2. Implement time interval-based rate limiting configuration
  3. Add real-time status indicators in chat interface and title bar
  4. Create provider rate limit settings in configuration panel
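The interval-based limiting described in points 1–2 can be sketched as a sliding-window QPS check. This is a rough illustration only: `SlidingWindowLimiter`, its fields, and the fixed 1-second window are assumptions for the sketch, not the actual RateLimitManager API added in this PR.

```typescript
type RateLimitConfig = { enabled: boolean; qpsLimit: number }

// Hypothetical per-provider limiter: tracks request timestamps and allows a
// new request only while fewer than qpsLimit requests fall in the last second.
class SlidingWindowLimiter {
  private timestamps: number[] = []

  constructor(private config: RateLimitConfig) {}

  // True when another request may run inside the current 1-second window.
  canExecute(now: number = Date.now()): boolean {
    if (!this.config.enabled) return true
    // Drop timestamps that have aged out of the window.
    this.timestamps = this.timestamps.filter((t) => now - t < 1000)
    return this.timestamps.length < this.config.qpsLimit
  }

  record(now: number = Date.now()): void {
    this.timestamps.push(now)
  }
}
```

A caller would check `canExecute()` before dispatching and `record()` on dispatch; requests that fail the check go to the provider's queue.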

Summary by CodeRabbit

  • New Features

    • Introduced rate limiting controls for LLM providers, allowing users to set and manage request limits per provider.
    • Added real-time rate limit status indicators in the chat input and model selection UI, showing queue length, wait times, and current QPS.
    • Displayed detailed rate limit messages and progress bars when requests are queued due to rate limiting.
    • Added a dedicated settings panel for configuring provider rate limits.
  • Localization

    • Added comprehensive rate limit-related translations for multiple languages in chat and settings interfaces.
  • Enhancements

    • Improved chat streaming to display rate limit status updates and queue information during message processing.
    • Enabled quick access to provider settings and provider switching directly from rate limit messages.

Contributor

coderabbitai bot commented Aug 3, 2025

Walkthrough

This change introduces a comprehensive rate limiting feature for API model requests. It adds backend rate limit logic, interfaces, and state management, as well as frontend UI components and localization for configuring, displaying, and responding to rate limit status. The update covers both core logic and user interface, supporting multiple languages.

Changes

Cohort / File(s) Change Summary
Backend Rate Limiting Logic
src/main/presenter/llmProviderPresenter/index.ts, src/main/events.ts
Implements per-provider rate limiting with queueing, configuration, and status reporting. Adds new events for rate limiting.
Frontend UI: Rate Limit Status & Controls
src/renderer/src/components/ChatInput.vue, src/renderer/src/components/TitleView.vue, src/renderer/src/components/message/MessageBlockAction.vue, src/renderer/src/components/settings/ModelProviderSettingsDetail.vue, src/renderer/src/components/settings/ProviderRateLimitConfig.vue
Adds UI to display rate limit status in chat input and model selection, shows rate limit waiting state in messages, and introduces a settings component for configuring provider rate limits.
Frontend State & Event Handling
src/renderer/src/stores/chat.ts, src/shared/chat.d.ts, src/shared/presenter.d.ts
Adds state and type support for rate limit events, exposes provider selector event, and updates message handling for rate limit blocks.
Event Constants
src/renderer/src/events.ts
Adds rate limit event name constants for IPC/event bus communication.
Localization: Rate Limit UI
src/renderer/src/i18n/*/chat.json, src/renderer/src/i18n/*/settings.json
Adds new localized strings for rate limit status, controls, tooltips, and UI elements in all supported languages.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant ChatInput/TitleView
    participant Renderer Store
    participant LLMProviderPresenter
    participant Provider API

    User->>ChatInput/TitleView: Sends message / selects model
    ChatInput/TitleView->>Renderer Store: Dispatch send/stream action
    Renderer Store->>LLMProviderPresenter: Request stream completion
    LLMProviderPresenter->>LLMProviderPresenter: Check rate limit for provider
    alt Can execute immediately
        LLMProviderPresenter->>LLMProviderPresenter: Record request
        LLMProviderPresenter->>Provider API: Send request
        Provider API-->>LLMProviderPresenter: Return result
        LLMProviderPresenter-->>Renderer Store: Stream event (normal)
    else Exceeds rate limit
        LLMProviderPresenter-->>Renderer Store: Stream event (type: rate_limit, with queue info)
        LLMProviderPresenter->>LLMProviderPresenter: Queue request
        Note over LLMProviderPresenter: Wait until allowed by QPS
        LLMProviderPresenter->>Provider API: Send request when allowed
        Provider API-->>LLMProviderPresenter: Return result
        LLMProviderPresenter-->>Renderer Store: Stream event (normal)
    end
    Renderer Store-->>ChatInput/TitleView: Update UI with rate limit status

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~40 minutes

Assessment against linked issues

Objective Addressed Explanation
Implement per-provider API request rate limiting logic (backend and state) (#543)
Provide UI controls to configure rate limit per provider (#543)
Display real-time rate limit status and queue/wait info in chat and model selection UI (#543)
Show user feedback/messages when requests are delayed due to rate limiting (#543)

Assessment against linked issues: Out-of-scope changes

No out-of-scope changes detected. All functional changes are directly related to adding, configuring, and displaying API request rate limiting as described in the linked issue.

Poem

A rabbit hops and counts the ticks,
"No more stampedes, let's pace our clicks!"
With queues and limits, the flow is neat,
Providers rest, no frantic beat.
Now every bunny, calm and bright,
Knows when to pause—and when to write!
🐇✨


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b7a1f65 and f012117.

📒 Files selected for processing (11)
  • src/main/presenter/llmProviderPresenter/index.ts (8 hunks)
  • src/renderer/src/components/settings/ProviderRateLimitConfig.vue (1 hunks)
  • src/renderer/src/i18n/en-US/settings.json (1 hunks)
  • src/renderer/src/i18n/fa-IR/settings.json (1 hunks)
  • src/renderer/src/i18n/fr-FR/settings.json (1 hunks)
  • src/renderer/src/i18n/ja-JP/settings.json (1 hunks)
  • src/renderer/src/i18n/ko-KR/settings.json (1 hunks)
  • src/renderer/src/i18n/ru-RU/settings.json (1 hunks)
  • src/renderer/src/i18n/zh-CN/settings.json (1 hunks)
  • src/renderer/src/i18n/zh-HK/settings.json (1 hunks)
  • src/renderer/src/i18n/zh-TW/settings.json (1 hunks)
✅ Files skipped from review due to trivial changes (4)
  • src/renderer/src/i18n/en-US/settings.json
  • src/renderer/src/i18n/zh-HK/settings.json
  • src/renderer/src/i18n/ja-JP/settings.json
  • src/renderer/src/i18n/ko-KR/settings.json
🚧 Files skipped from review as they are similar to previous changes (6)
  • src/renderer/src/i18n/ru-RU/settings.json
  • src/renderer/src/i18n/zh-TW/settings.json
  • src/renderer/src/i18n/fr-FR/settings.json
  • src/renderer/src/i18n/fa-IR/settings.json
  • src/renderer/src/i18n/zh-CN/settings.json
  • src/main/presenter/llmProviderPresenter/index.ts
🧰 Additional context used
📓 Path-based instructions (9)
**/*.{ts,tsx,js,jsx,vue}

📄 CodeRabbit Inference Engine (CLAUDE.md)

Use English for logs and comments

Files:

  • src/renderer/src/components/settings/ProviderRateLimitConfig.vue
src/renderer/src/**/*.vue

📄 CodeRabbit Inference Engine (CLAUDE.md)

src/renderer/src/**/*.vue: Use Composition API for all Vue 3 components
Use Tailwind CSS with scoped styles for styling
Organize components by feature in src/renderer/src/
Follow existing component patterns in src/renderer/src/ when creating new UI components
Use Composition API with proper TypeScript typing for new UI components
Implement responsive design with Tailwind CSS for new UI components
Add proper error handling and loading states for new UI components

Use scoped styles to prevent CSS conflicts between components

Files:

  • src/renderer/src/components/settings/ProviderRateLimitConfig.vue
src/renderer/src/**/*.{ts,tsx,vue}

📄 CodeRabbit Inference Engine (CLAUDE.md)

src/renderer/src/**/*.{ts,tsx,vue}: Use Pinia for frontend state management
Renderer to Main: Use usePresenter.ts composable for direct presenter method calls

Files:

  • src/renderer/src/components/settings/ProviderRateLimitConfig.vue
src/renderer/src/**/*

📄 CodeRabbit Inference Engine (.cursor/rules/i18n.mdc)

src/renderer/src/**/*: All user-facing strings must use i18n keys (avoid hardcoded user-visible text in code)
Use the 'vue-i18n' framework for all internationalization in the renderer
Ensure all user-visible text in the renderer uses the translation system

Files:

  • src/renderer/src/components/settings/ProviderRateLimitConfig.vue
src/renderer/**/*.{vue,ts,js,tsx,jsx}

📄 CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)

Renderer process code goes in src/renderer

Files:

  • src/renderer/src/components/settings/ProviderRateLimitConfig.vue
src/renderer/src/**/*.{vue,ts,tsx,js,jsx}

📄 CodeRabbit Inference Engine (.cursor/rules/vue-best-practices.mdc)

src/renderer/src/**/*.{vue,ts,tsx,js,jsx}: Use the Composition API for better code organization and reusability
Implement proper state management with Pinia
Utilize Vue Router for navigation and route management
Leverage Vue's built-in reactivity system for efficient data handling

Files:

  • src/renderer/src/components/settings/ProviderRateLimitConfig.vue
src/renderer/**/*.{ts,tsx,vue}

📄 CodeRabbit Inference Engine (.cursor/rules/vue-shadcn.mdc)

src/renderer/**/*.{ts,tsx,vue}: Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError).
Use TypeScript for all code; prefer types over interfaces.
Avoid enums; use const objects instead.
Use arrow functions for methods and computed properties.
Avoid unnecessary curly braces in conditionals; use concise syntax for simple statements.

Files:

  • src/renderer/src/components/settings/ProviderRateLimitConfig.vue
src/renderer/**/*.{vue,ts}

📄 CodeRabbit Inference Engine (.cursor/rules/vue-shadcn.mdc)

Implement lazy loading for routes and components.

Files:

  • src/renderer/src/components/settings/ProviderRateLimitConfig.vue
src/renderer/**/*.{ts,vue}

📄 CodeRabbit Inference Engine (.cursor/rules/vue-shadcn.mdc)

src/renderer/**/*.{ts,vue}: Use useFetch and useAsyncData for data fetching.
Implement SEO best practices using Nuxt's useHead and useSeoMeta.

Files:

  • src/renderer/src/components/settings/ProviderRateLimitConfig.vue
🧠 Learnings (18)
📚 Learning: applies to src/main/presenter/configpresenter/providers.ts : add provider configuration in configpre...
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-21T01:45:33.790Z
Learning: Applies to src/main/presenter/configPresenter/providers.ts : Add provider configuration in configPresenter/providers.ts when adding a new LLM provider

Applied to files:

  • src/renderer/src/components/settings/ProviderRateLimitConfig.vue
📚 Learning: applies to src/renderer/**/*.{vue} : use vue 3 with typescript, leveraging definecomponent and propt...
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-07-23T00:45:57.322Z
Learning: Applies to src/renderer/**/*.{vue} : Use Vue 3 with TypeScript, leveraging defineComponent and PropType.

Applied to files:

  • src/renderer/src/components/settings/ProviderRateLimitConfig.vue
📚 Learning: applies to src/renderer/**/*.{vue} : leverage ref, reactive, and computed for reactive state managem...
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-07-23T00:45:57.322Z
Learning: Applies to src/renderer/**/*.{vue} : Leverage ref, reactive, and computed for reactive state management.

Applied to files:

  • src/renderer/src/components/settings/ProviderRateLimitConfig.vue
📚 Learning: applies to src/renderer/**/*.{vue} : use composition api and declarative programming patterns; avoid...
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-07-23T00:45:57.322Z
Learning: Applies to src/renderer/**/*.{vue} : Use composition API and declarative programming patterns; avoid options API.

Applied to files:

  • src/renderer/src/components/settings/ProviderRateLimitConfig.vue
📚 Learning: applies to src/renderer/**/*.{vue} : use <script setup> syntax for concise component definitions....
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-07-23T00:45:57.322Z
Learning: Applies to src/renderer/**/*.{vue} : Use <script setup> syntax for concise component definitions.

Applied to files:

  • src/renderer/src/components/settings/ProviderRateLimitConfig.vue
📚 Learning: applies to src/main/presenter/llmproviderpresenter/providers/*.ts : provider implementations should ...
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should aggregate and yield usage events as part of the standardized stream.

Applied to files:

  • src/renderer/src/components/settings/ProviderRateLimitConfig.vue
📚 Learning: applies to src/renderer/**/*.{vue} : use shadcn vue, radix vue, and tailwind for components and styl...
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-07-23T00:45:57.322Z
Learning: Applies to src/renderer/**/*.{vue} : Use Shadcn Vue, Radix Vue, and Tailwind for components and styling.

Applied to files:

  • src/renderer/src/components/settings/ProviderRateLimitConfig.vue
📚 Learning: applies to src/renderer/src/**/*.vue : add proper error handling and loading states for new ui compo...
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-21T01:45:33.790Z
Learning: Applies to src/renderer/src/**/*.vue : Add proper error handling and loading states for new UI components

Applied to files:

  • src/renderer/src/components/settings/ProviderRateLimitConfig.vue
📚 Learning: applies to src/renderer/src/**/*.vue : use composition api with proper typescript typing for new ui ...
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-21T01:45:33.790Z
Learning: Applies to src/renderer/src/**/*.vue : Use Composition API with proper TypeScript typing for new UI components

Applied to files:

  • src/renderer/src/components/settings/ProviderRateLimitConfig.vue
📚 Learning: applies to src/renderer/src/**/*.{vue,ts,tsx,js,jsx} : use the composition api for better code organ...
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-best-practices.mdc:0-0
Timestamp: 2025-07-21T01:47:28.817Z
Learning: Applies to src/renderer/src/**/*.{vue,ts,tsx,js,jsx} : Use the Composition API for better code organization and reusability

Applied to files:

  • src/renderer/src/components/settings/ProviderRateLimitConfig.vue
📚 Learning: applies to src/renderer/**/*.{ts,tsx,vue} : avoid enums; use const objects instead....
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-07-23T00:45:57.322Z
Learning: Applies to src/renderer/**/*.{ts,tsx,vue} : Avoid enums; use const objects instead.

Applied to files:

  • src/renderer/src/components/settings/ProviderRateLimitConfig.vue
📚 Learning: applies to src/renderer/**/*.{ts,tsx,vue} : use descriptive variable names with auxiliary verbs (e.g...
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-07-23T00:45:57.322Z
Learning: Applies to src/renderer/**/*.{ts,tsx,vue} : Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError).

Applied to files:

  • src/renderer/src/components/settings/ProviderRateLimitConfig.vue
📚 Learning: applies to src/renderer/**/*.{vue} : use suspense for asynchronous components....
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-07-23T00:45:57.322Z
Learning: Applies to src/renderer/**/*.{vue} : Use Suspense for asynchronous components.

Applied to files:

  • src/renderer/src/components/settings/ProviderRateLimitConfig.vue
📚 Learning: applies to src/renderer/**/*.{ts,vue} : use usefetch and useasyncdata for data fetching....
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-07-23T00:45:57.322Z
Learning: Applies to src/renderer/**/*.{ts,vue} : Use useFetch and useAsyncData for data fetching.

Applied to files:

  • src/renderer/src/components/settings/ProviderRateLimitConfig.vue
📚 Learning: applies to src/renderer/src/stores/**/*.{vue,ts,tsx,js,jsx} : utilize actions for side effects and a...
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/pinia-best-practices.mdc:0-0
Timestamp: 2025-07-21T01:47:03.479Z
Learning: Applies to src/renderer/src/stores/**/*.{vue,ts,tsx,js,jsx} : Utilize actions for side effects and asynchronous operations

Applied to files:

  • src/renderer/src/components/settings/ProviderRateLimitConfig.vue
📚 Learning: applies to src/main/presenter/llmproviderpresenter/providers/*.ts : provider implementations should ...
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield error events in the standardized format when errors occur.

Applied to files:

  • src/renderer/src/components/settings/ProviderRateLimitConfig.vue
📚 Learning: applies to **/*.{ts,tsx} : provide user-friendly error messages...
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/error-logging.mdc:0-0
Timestamp: 2025-07-21T01:46:19.702Z
Learning: Applies to **/*.{ts,tsx} : Provide user-friendly error messages

Applied to files:

  • src/renderer/src/components/settings/ProviderRateLimitConfig.vue
📚 Learning: applies to src/renderer/**/*.{vue} : use provide/inject for dependency injection when appropriate....
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-07-23T00:45:57.322Z
Learning: Applies to src/renderer/**/*.{vue} : Use provide/inject for dependency injection when appropriate.

Applied to files:

  • src/renderer/src/components/settings/ProviderRateLimitConfig.vue
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: build-check (x64)
🔇 Additional comments (8)
src/renderer/src/components/settings/ProviderRateLimitConfig.vue (8)

1-75: LGTM! Template follows Vue 3 and i18n best practices.

The template properly uses i18n keys for all user-facing text, implements responsive design with Tailwind CSS, and follows good Vue 3 patterns with conditional rendering and proper component structure.


77-96: Well-structured imports following established patterns.

The imports properly use Vue 3 Composition API, shadcn/ui components, and the required usePresenter composable pattern.


98-104: Props and emits properly typed.

Clean TypeScript definitions for props and emits following Vue 3 best practices.


106-119: Reactive state properly implemented.

Good use of ref() with descriptive variable names and proper TypeScript typing. State initialization follows Vue 3 best practices.


128-159: Excellent UX with validation and confirmation flows.

Well-implemented event handlers with proper validation, confirmation dialogs for edge cases, and user feedback via toast notifications. The logic appropriately handles potentially destructive actions.


186-215: Well-implemented time formatting with proper i18n.

The formatting functions handle edge cases appropriately and use proper internationalization for time-related strings. Clean logic for relative time calculations.


217-247: Proper event handling and lifecycle management.

Good implementation of IPC event listeners with proper cleanup to prevent memory leaks. The periodic status updates and event handling follow Vue 3 best practices.


249-266: Watchers properly implemented for reactive updates.

Good use of watchers to handle prop changes and preserve valid state. The deep watching for provider changes and value preservation logic is well-implemented.


Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 13

♻️ Duplicate comments (3)
src/renderer/src/i18n/zh-TW/chat.json (1)

15-19: Same duplicate tooltip keys as in en-US

rateLimit{Queue|Ready|Waiting}Tooltip appear under both "input" and top-level "rateLimit". Consolidate to a single definition and place it in common.json to avoid divergence across languages.

Also applies to: 50-52
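One way to follow the consolidation suggestion, assuming vue-i18n-style nested message objects: define the tooltips once under a shared namespace so both call sites resolve the same key. The key names and English wording below are illustrative, not the PR's actual strings.

```typescript
// Hypothetical shared message module (e.g. sourced from common.json):
// the tooltips live under one namespace instead of being duplicated in
// both the "input" and "rateLimit" sections.
export const rateLimitMessages = {
  common: {
    rateLimit: {
      tooltip: {
        queue: 'Requests are queued; they will run as capacity frees up.',
        ready: 'The provider can accept a request immediately.',
        waiting: 'Waiting for the rate limit window before sending.'
      }
    }
  }
}

// Both the chat input and the rateLimit UI reference the same key,
// so the strings cannot diverge between the two sections.
export const queueTooltipKey = 'common.rateLimit.tooltip.queue'
```

Per-language files would then only need to translate the single `common.rateLimit.tooltip.*` set.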

src/renderer/src/i18n/ja-JP/chat.json (1)

15-19: Duplicate tooltip strings – unify

Identical tooltip strings are duplicated in two sections. Follow the refactor notes given in the en-US file.

Also applies to: 50-52

src/renderer/src/i18n/fa-IR/chat.json (1)

15-19: Duplicate tooltip strings – unify

The same tooltip keys are duplicated; keep only one set and relocate to common.json.

Also applies to: 50-52

🧹 Nitpick comments (13)
src/main/events.ts (1)

116-122: Keep main / renderer event enums in sync & avoid manual drift

RATE_LIMIT_EVENTS is now defined here and in src/renderer/src/events.ts.
Given the duplication, the two enums will inevitably diverge unless they are generated from a single source (e.g. a shared events.ts in src/shared).

Consider refactoring to export the enum from a common module that both processes can import, or at minimum add a unit-test that asserts the two objects contain identical keys/values.
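The fallback safeguard mentioned above (a unit test asserting the two enums stay identical) could look like the sketch below. The two constants stand in for imports from the main and renderer modules; the event values match the `RATE_LIMIT_EVENTS` literal shown in this review.

```typescript
// In a real test these would be imported from src/main/events.ts and
// src/renderer/src/events.ts; here the second copy is derived from the
// first purely for illustration.
const MAIN_RATE_LIMIT_EVENTS = {
  CONFIG_UPDATED: 'rate-limit:config-updated',
  REQUEST_QUEUED: 'rate-limit:request-queued',
  REQUEST_EXECUTED: 'rate-limit:request-executed',
  LIMIT_EXCEEDED: 'rate-limit:limit-exceeded'
} as const

const RENDERER_RATE_LIMIT_EVENTS = { ...MAIN_RATE_LIMIT_EVENTS } as const

// True only when both objects have identical key sets and identical values.
const eventEnumsMatch = (
  a: Record<string, string>,
  b: Record<string, string>
): boolean => {
  const keysA = Object.keys(a).sort()
  const keysB = Object.keys(b).sort()
  return (
    keysA.length === keysB.length &&
    keysA.every((k, i) => k === keysB[i] && a[k] === b[k])
  )
}
```

Such a test fails the build as soon as a key or value is edited in only one of the two files.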

src/renderer/src/events.ts (1)

87-93: Duplicated constant definition – prefer a shared source of truth

This block repeats the RATE_LIMIT_EVENTS literal already declared in src/main/events.ts.
While the comment at the top of each file justifies the duplication, experience shows minor edits (typos, new keys) are often applied to only one file, causing subtle runtime bugs.

Proposed direction:

-// 速率限制相关事件
-export const RATE_LIMIT_EVENTS = {
-  CONFIG_UPDATED: 'rate-limit:config-updated',
-  REQUEST_QUEUED: 'rate-limit:request-queued',
-  REQUEST_EXECUTED: 'rate-limit:request-executed',
-  LIMIT_EXCEEDED: 'rate-limit:limit-exceeded'
-}
+// Rate-limit events shared by main & renderer
+import { RATE_LIMIT_EVENTS } from '../../shared/events'  // path illustrative

If direct import is impossible (ESM/CJS boundary), generate the file during build-time from a single JSON template.

src/renderer/src/i18n/fa-IR/settings.json (1)

614-630: Consolidate generic time strings into common.json to avoid duplication

Keys like never, justNow, secondsAgo, minutesAgo, immediately, secondsLater already exist (e.g., Line 62 data.never) or are generic enough to be reused across the entire UI.
Duplicating them inside rateLimit scatters the same literal across files and languages, increasing maintenance cost and the risk of inconsistency.

Move these literals to src/renderer/src/i18n/common.json (or reference the existing ones) and keep only rate-limit-specific labels here.

src/renderer/src/i18n/zh-CN/chat.json (1)

49-53: Consider dot-separated leaf keys to follow naming convention

Guidelines require dot-separated, lowercase, descriptive keys (e.g. common.button.submit).
queueTooltip, readyTooltip, waitingTooltip could be renamed to tooltip.queue, tooltip.ready, tooltip.waiting for better hierarchy clarity.

src/renderer/src/i18n/zh-CN/settings.json (1)

614-630: Duplicate literals already exist elsewhere

never, justNow, secondsAgo, minutesAgo, immediately, secondsLater were added here but the same Chinese strings appear in chat.json and other setting sections.
Extract them to common.json and reuse to ensure one-source-of-truth across the app.

src/renderer/src/i18n/fr-FR/chat.json (1)

49-53: Key naming style

For alignment with the convention (dot hierarchy), prefer tooltip.queue, tooltip.ready, tooltip.waiting under rateLimit rather than camelCase leaf keys.

src/renderer/src/i18n/zh-HK/chat.json (1)

49-53: Adopt dot-separated leaf keys

Rename queueTooltip, readyTooltip, waitingTooltip inside rateLimit to a hierarchical form (tooltip.queue, …) for consistency with naming guidelines.

src/renderer/src/i18n/ko-KR/chat.json (1)

49-53: Consider consolidating duplicate translation keys.

The rateLimit section contains tooltip messages that duplicate content from the input section (lines 17-19 vs 50-52). While this might be intentional for different contexts, it could lead to maintenance issues.

Consider using shared keys in a common section or referencing the existing keys to ensure consistency across updates.

src/renderer/src/i18n/ru-RU/chat.json (1)

49-53: Same duplication concern as other language files.

The rateLimit section duplicates tooltip content from the input section. This creates the same maintenance risk across all language files.

Consider implementing a consistent approach to handling shared translation keys across all language files.

src/renderer/src/components/TitleView.vue (1)

283-283: Consider optimizing the polling interval.

Polling every second might impact performance, especially with multiple components. Consider:

  1. Using a longer interval (e.g., 2-3 seconds)
  2. Implementing adaptive polling based on queue status
  3. Using a timer type that matches window.setTimeout's return value
-let statusInterval: number | null = null
+let statusInterval: ReturnType<typeof setTimeout> | null = null

 onMounted(async () => {
   // ... existing code ...

-  statusInterval = window.setInterval(loadRateLimitStatus, 1000)
+  // Poll more frequently when queued, less frequently otherwise
+  const startAdaptivePolling = () => {
+    const pollInterval = rateLimitStatus.value?.queueLength ? 1000 : 3000
+    statusInterval = window.setTimeout(() => {
+      loadRateLimitStatus().then(() => {
+        if (statusInterval) startAdaptivePolling()
+      })
+    }, pollInterval)
+  }
+  startAdaptivePolling()
 })

 onUnmounted(() => {
   if (statusInterval) {
-    clearInterval(statusInterval)
+    clearTimeout(statusInterval)
+    statusInterval = null
   }

Also applies to: 298-298, 301-304

src/main/presenter/llmProviderPresenter/index.ts (1)

283-288: Consider adding error handling and validation.

The rate limit integration could be more robust with additional safeguards.

Add validation in the update method:

updateProviderRateLimit(providerId: string, enabled: boolean, qpsLimit: number): void {
  if (qpsLimit <= 0 || qpsLimit > 1000) {
    throw new Error('QPS limit must be between 1 and 1000')
  }
  
  try {
    this.rateLimitPresenter.updateProviderConfig(providerId, {
      enabled,
      qpsLimit
    })
  } catch (error) {
    console.error(`Failed to update rate limit for provider ${providerId}:`, error)
    throw error
  }
}

Consider adding metrics logging for monitoring:

// In startStreamCompletion, after rate limit check
if (!canExecute) {
  console.log(`[RateLimit] Request queued for provider ${providerId}, queue length: ${queueLength}`)
}

Also applies to: 441-464

src/main/presenter/rateLimitPresenter/rateLimitManager.ts (2)

135-146: Consider defensive programming for queue operations

While the current implementation is correct, using optional chaining would make it more robust against potential edge cases.

 while (state.queue.length > 0) {
   if (this.canExecuteImmediately(providerId)) {
     const queueItem = state.queue.shift()
-    if (queueItem) {
-      this.recordRequest(providerId)
-      queueItem.resolve()
-
-      console.log(
-        `[RateLimitManager] Request executed for ${providerId}, remaining queue: ${state.queue.length}`
-      )
-    }
+    this.recordRequest(providerId)
+    queueItem?.resolve()
+
+    console.log(
+      `[RateLimitManager] Request executed for ${providerId}, remaining queue: ${state.queue.length}`
+    )

206-227: Remove unnecessary 'any' type annotation

The TypeScript compiler can infer the correct type from the return type declaration.

-    const status: Record<string, any> = {}
+    const status: Record<string, {
+      config: RateLimitConfig
+      currentQps: number
+      queueLength: number
+      lastRequestTime: number
+    }> = {}

Comment on lines 28 to 43
updateProviderConfig(providerId: string, config: Partial<RateLimitConfig>): void {
  this.rateLimitManager.setProviderConfig(providerId, config)
  const provider = this.configPresenter.getProviderById(providerId)
  if (provider) {
    const updatedProvider: LLM_PROVIDER = {
      ...provider,
      rateLimit: {
        enabled: config.enabled ?? provider.rateLimit?.enabled ?? false,
        qpsLimit: config.qpsLimit ?? provider.rateLimit?.qpsLimit ?? 10
      }
    }

    this.configPresenter.setProviderById(providerId, updatedProvider)
    console.log(`[RateLimitPresenter] Updated persistent config for ${providerId}`)
  }
}

🛠️ Refactor suggestion

Ensure null safety when updating provider config

The method should validate that the provider exists before attempting to update to avoid potential issues.

 updateProviderConfig(providerId: string, config: Partial<RateLimitConfig>): void {
-  this.rateLimitManager.setProviderConfig(providerId, config)
   const provider = this.configPresenter.getProviderById(providerId)
   if (provider) {
+    this.rateLimitManager.setProviderConfig(providerId, config)
     const updatedProvider: LLM_PROVIDER = {
       ...provider,
       rateLimit: {
         enabled: config.enabled ?? provider.rateLimit?.enabled ?? false,
         qpsLimit: config.qpsLimit ?? provider.rateLimit?.qpsLimit ?? 10
       }
     }

     this.configPresenter.setProviderById(providerId, updatedProvider)
     console.log(`[RateLimitPresenter] Updated persistent config for ${providerId}`)
+  } else {
+    console.warn(`[RateLimitPresenter] Provider ${providerId} not found`)
   }
 }
🤖 Prompt for AI Agents
In src/main/presenter/rateLimitPresenter/index.ts around lines 28 to 43, the
updateProviderConfig method should first check if the provider exists before
proceeding with the update to avoid null reference errors. Add a guard clause or
conditional check to confirm the provider is not null or undefined before
creating the updatedProvider object and calling setProviderById. If the provider
does not exist, the method should exit early or handle the case gracefully
without attempting to update.

Comment on lines 72 to 76
const now = Date.now()
const intervalMs = (1 / state.config.qpsLimit) * 1000

return now - state.lastRequestTime >= intervalMs
}

⚠️ Potential issue

Potential division by zero issue

When state.config.qpsLimit is 0, this will result in Infinity for intervalMs, which could cause unexpected behavior in timing calculations.

Add validation to prevent zero QPS limit:

 const now = Date.now()
+if (state.config.qpsLimit <= 0) {
+  return true // Or handle as appropriate for your use case
+}
 const intervalMs = (1 / state.config.qpsLimit) * 1000

 return now - state.lastRequestTime >= intervalMs
🤖 Prompt for AI Agents
In src/main/presenter/rateLimitPresenter/rateLimitManager.ts around lines 72 to
76, the calculation of intervalMs divides by state.config.qpsLimit without
checking if it is zero, which can cause division by zero and result in Infinity.
Add a validation step before this calculation to ensure state.config.qpsLimit is
greater than zero; if it is zero or less, handle it appropriately by either
returning false immediately or setting a default interval to avoid division by
zero errors.
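The suggested guard amounts to a small pure check. `canExecuteNow` below is an illustrative helper written for this sketch, not a function from the PR:

```typescript
// Interval check for an interval-based rate limiter, guarding against a
// zero or negative QPS limit (1 / 0 would yield an Infinity interval and
// block every request forever).
function canExecuteNow(
  lastRequestTime: number,
  qpsLimit: number,
  now: number = Date.now()
): boolean {
  if (qpsLimit <= 0) return true // treat an invalid limit as "no limit"
  const intervalMs = 1000 / qpsLimit
  return now - lastRequestTime >= intervalMs
}
```

Taking `now` as a parameter keeps the function deterministic under test; production callers simply rely on the `Date.now()` default.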

Comment on lines 185 to 194
getCurrentQps(providerId: string): number {
  const state = this.providerStates.get(providerId)
  if (!state || !state.config.enabled || state.lastRequestTime === 0) return 0

  const now = Date.now()
  const timeSinceLastRequest = now - state.lastRequestTime
  const intervalMs = (1 / state.config.qpsLimit) * 1000

  return timeSinceLastRequest < intervalMs ? 1 : 0
}

⚠️ Potential issue

Method name doesn't match implementation and potential division by zero

  1. The method has the same division by zero issue when qpsLimit is 0
  2. The method returns 1 or 0, not actual QPS. Consider renaming to isWithinRateLimit or implementing actual QPS calculation
 getCurrentQps(providerId: string): number {
   const state = this.providerStates.get(providerId)
   if (!state || !state.config.enabled || state.lastRequestTime === 0) return 0

+  if (state.config.qpsLimit <= 0) return 0
+
   const now = Date.now()
   const timeSinceLastRequest = now - state.lastRequestTime
   const intervalMs = (1 / state.config.qpsLimit) * 1000

   return timeSinceLastRequest < intervalMs ? 1 : 0
 }

Consider implementing actual QPS tracking if needed:

// Track request timestamps in a sliding window
private requestTimestamps: Map<string, number[]> = new Map()

getCurrentQps(providerId: string): number {
  const timestamps = this.requestTimestamps.get(providerId) || []
  const now = Date.now()
  const oneSecondAgo = now - 1000
  
  // Filter timestamps within the last second
  const recentRequests = timestamps.filter(t => t > oneSecondAgo)
  return recentRequests.length
}
🤖 Prompt for AI Agents
In src/main/presenter/rateLimitPresenter/rateLimitManager.ts around lines 185 to
194, the method named getCurrentQps does not actually return the current QPS but
a binary 1 or 0, and it risks division by zero if qpsLimit is 0. To fix this,
either rename the method to something like isWithinRateLimit to reflect its
boolean nature or implement a true QPS calculation by tracking request
timestamps in a sliding window and returning the count of requests in the last
second. Also, ensure to handle the case when qpsLimit is zero to avoid division
by zero errors.
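That sliding-window idea can be made self-contained, with an injectable clock so the window is testable without real delays. `QpsWindow` and its methods are illustrative names, not the PR's API:

```typescript
// Sliding-window QPS counter: reports how many requests were recorded
// in the last 1000 ms, pruning older timestamps as it goes.
class QpsWindow {
  private timestamps: number[] = []

  record(now: number = Date.now()): void {
    this.timestamps.push(now)
  }

  currentQps(now: number = Date.now()): number {
    const oneSecondAgo = now - 1000
    // Drop entries older than one second so the array stays bounded.
    this.timestamps = this.timestamps.filter((t) => t > oneSecondAgo)
    return this.timestamps.length
  }
}
```

Unlike the binary 1-or-0 check it would replace, this returns an actual request count, so a UI "current QPS" indicator reads meaningfully at limits above 1.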

Comment on lines +396 to +404
const canSendImmediately = computed(() => {
  if (!rateLimitStatus.value?.config.enabled) return true
  const now = Date.now()
  const intervalMs = (1 / rateLimitStatus.value.config.qpsLimit) * 1000
  const timeSinceLastRequest = now - rateLimitStatus.value.lastRequestTime
  return timeSinceLastRequest >= intervalMs
})

⚠️ Potential issue

Potential division by zero in interval calculation

When qpsLimit is 0, the interval calculation will produce Infinity.

 const canSendImmediately = computed(() => {
   if (!rateLimitStatus.value?.config.enabled) return true

+  if (rateLimitStatus.value.config.qpsLimit <= 0) return true
+
   const now = Date.now()
   const intervalMs = (1 / rateLimitStatus.value.config.qpsLimit) * 1000
   const timeSinceLastRequest = now - rateLimitStatus.value.lastRequestTime

   return timeSinceLastRequest >= intervalMs
 })
🤖 Prompt for AI Agents
In src/renderer/src/components/ChatInput.vue around lines 396 to 404, the
calculation of intervalMs divides 1 by qpsLimit without checking if qpsLimit is
zero, which can cause a division by zero and result in Infinity. To fix this,
add a guard condition to check if qpsLimit is greater than zero before
performing the division; if qpsLimit is zero or less, handle it appropriately by
returning false or true based on the intended behavior to avoid invalid interval
calculations.

Comment on lines +110 to +133
const updateRateLimitConfig = async () => {
  try {
    const qpsValue = convertIntervalToQps(intervalValue.value)
    await llmPresenter.updateProviderRateLimit(props.provider.id, rateLimitEnabled.value, qpsValue)
    emit('configChanged')
    await loadStatus()
  } catch (error) {
    console.error('Failed to update rate limit config:', error)
  }
}

// Load status
const loadStatus = async () => {
  try {
    const rateLimitStatus = await llmPresenter.getProviderRateLimitStatus(props.provider.id)
    status.value = {
      currentQps: rateLimitStatus.currentQps,
      queueLength: rateLimitStatus.queueLength,
      lastRequestTime: rateLimitStatus.lastRequestTime
    }
  } catch (error) {
    console.error('Failed to load rate limit status:', error)
  }
}

🛠️ Refactor suggestion

Enhance error handling and user feedback.

The component lacks user-facing error feedback and loading states during async operations. Users won't know if configuration updates succeed or fail.

Consider adding:

  • Loading state during updateRateLimitConfig
  • Toast notifications for success/error states
  • Retry mechanisms for failed operations
  • Proper error boundaries as specified in the coding guidelines
+const isLoading = ref(false)
+const error = ref<string | null>(null)

 const updateRateLimitConfig = async () => {
+  isLoading.value = true
+  error.value = null
   try {
     const qpsValue = convertIntervalToQps(intervalValue.value)
     await llmPresenter.updateProviderRateLimit(props.provider.id, rateLimitEnabled.value, qpsValue)
     emit('configChanged')
     await loadStatus()
   } catch (error) {
-    console.error('Failed to update rate limit config:', error)
+    error.value = 'Failed to update rate limit configuration'
+    console.error('Failed to update rate limit config:', error)
+  } finally {
+    isLoading.value = false
   }
 }

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In src/renderer/src/components/settings/ProviderRateLimitConfig.vue around lines
110 to 133, the async functions updateRateLimitConfig and loadStatus lack user
feedback and loading state indicators. Add a loading state variable to show
progress during these operations, implement toast notifications to inform users
of success or failure, and include retry logic for failed attempts. Also, wrap
these calls with proper error boundaries as per coding guidelines to improve
robustness and user experience.

Comment on lines +150 to +163
  return t('chat.rateLimit.queueTooltip', {
    count: rateLimitStatus.value.queueLength,
    interval: intervalSeconds
  })
}
if (canExecuteImmediately.value) {
  return t('chat.rateLimit.readyTooltip', { interval: intervalSeconds })
}
const waitTime = Math.ceil(
  (rateLimitStatus.value.lastRequestTime + intervalSeconds * 1000 - Date.now()) / 1000
)
return t('chat.rateLimit.waitingTooltip', { seconds: waitTime, interval: intervalSeconds })

💡 Verification agent

🧩 Analysis chain

Verify i18n keys and event constants.

Please ensure the following i18n keys are defined in the translation files:

  • chat.rateLimit.queueTooltip
  • chat.rateLimit.readyTooltip
  • chat.rateLimit.waitingTooltip

🏁 Script executed:

#!/bin/bash
# Check if the i18n keys exist in translation files
echo "Checking for rate limit i18n keys..."
fd -e json -e yaml . src/renderer/src/locales | xargs rg -l "chat\.rateLimit\.(queueTooltip|readyTooltip|waitingTooltip)"

# Check if RATE_LIMIT_EVENTS are properly defined
echo -e "\nChecking RATE_LIMIT_EVENTS definition..."
rg "export.*RATE_LIMIT_EVENTS" src/renderer/src/events.ts -A 10

Length of output: 880


🏁 Script executed:

#!/bin/bash
# Verify translation keys in the correct i18n directory
echo "Searching for rate limit i18n keys in src/renderer/src/i18n..."
fd -e json src/renderer/src/i18n | xargs rg -H "chat\.rateLimit\.(queueTooltip|readyTooltip|waitingTooltip)"

Length of output: 976


Missing i18n keys for rate-limit tooltips

The t('chat.rateLimit.*') calls in TitleView.vue aren’t backed by any entries in your JSON translation files. Please add them under chat.rateLimit in src/renderer/src/i18n/common.json (and mirror in each language file):

{
  "chat": {
    "rateLimit": {
      "queueTooltip": "In queue: {count} (every {interval} s)",
      "readyTooltip": "Ready to execute (every {interval} s)",
      "waitingTooltip": "Please wait {seconds} s (every {interval} s)"
    }
  }
}

• File to update:

  • src/renderer/src/i18n/common.json
  • and then sync the same keys in all other language JSONs

RATE_LIMIT_EVENTS in src/renderer/src/events.ts are defined correctly—no changes needed there.

🤖 Prompt for AI Agents
In src/renderer/src/i18n/common.json, add the missing i18n keys under
"chat.rateLimit" as specified: "queueTooltip", "readyTooltip", and
"waitingTooltip" with their respective message templates. Then, propagate these
new keys and translations to all other language JSON files to ensure consistent
localization support for the rate-limit tooltips used in
src/renderer/src/components/TitleView.vue lines 150 to 163.

Comment on lines +15 to +19
"rateLimitQueue": "Queue {count}",
"rateLimitWait": "Wait {seconds}s",
"rateLimitQueueTooltip": "{count} requests in queue, {interval}s interval",
"rateLimitReadyTooltip": "Ready to send, {interval}s interval",
"rateLimitWaitingTooltip": "Wait {seconds}s more, {interval}s interval"

🛠️ Refactor suggestion

Duplicate tooltip keys – keep a single canonical location

rateLimit{Queue|Ready|Waiting}Tooltip exists twice: once under "input" (Lines 15-19) and again under the top-level "rateLimit" block (Lines 50-52).
Maintaining two copies will inevitably drift out of sync and bloats every language file.

Proposed actions

  1. Keep these keys only once (prefer the dedicated rateLimit.* section).
  2. Move them to i18n/common.json because they are re-used verbatim across multiple views.
  3. Delete the redundant entries from the "input" section in all language files.

Also applies to: 50-52

🤖 Prompt for AI Agents
In src/renderer/src/i18n/en-US/chat.json around lines 15 to 19 and 50 to 52, the
tooltip keys rateLimitQueueTooltip, rateLimitReadyTooltip, and
rateLimitWaitingTooltip are duplicated under both the "input" section and the
top-level "rateLimit" block. To fix this, remove these keys from the "input"
section and keep only the canonical versions under the "rateLimit" block. Then
move these keys to i18n/common.json since they are reused across multiple views,
ensuring a single source of truth and avoiding duplication in all language
files.

Comment on lines +14 to 20
"historyPlaceholder": "(按 Tab 填充)",
"rateLimitQueue": "队列 {count}",
"rateLimitWait": "等待 {seconds}s",
"rateLimitQueueTooltip": "队列中有 {count} 个请求,间隔 {interval} 秒",
"rateLimitReadyTooltip": "可以发送请求,间隔 {interval} 秒",
"rateLimitWaitingTooltip": "还需等待 {seconds} 秒,间隔 {interval} 秒"
},

🛠️ Refactor suggestion

Potential over-nesting: generic rate-limit tooltips should share the settings keyset

The three tooltip keys under input.* (rateLimitQueueTooltip, rateLimitReadyTooltip, rateLimitWaitingTooltip) duplicate the exact text you added again under the top-level rateLimit.* object (Lines 49-53).
Keeping the same literals in two different branches of the tree is error-prone.

Either (a) keep them only once under rateLimit.* and reference that in code, or (b) drop the second copy if the component already resolves input.*.

🤖 Prompt for AI Agents
In src/renderer/src/i18n/zh-CN/chat.json around lines 14 to 20, the tooltip keys
under input.* duplicate the same text found under the top-level rateLimit.* keys
at lines 49-53, causing redundancy and potential inconsistency. To fix this,
remove the duplicated tooltip keys from the input.* section and ensure the code
references the single source of truth under rateLimit.* for these tooltips, or
if the component already resolves input.*, drop the rateLimit.* duplicates
instead to avoid maintaining two copies of the same literals.

Comment on lines +14 to 20
"historyPlaceholder": "(按 Tab 填充)",
"rateLimitQueue": "佇列 {count}",
"rateLimitWait": "等待 {seconds}秒",
"rateLimitQueueTooltip": "佇列中有 {count} 個請求,間隔 {interval} 秒",
"rateLimitReadyTooltip": "可以發送請求,間隔 {interval} 秒",
"rateLimitWaitingTooltip": "還需等待 {seconds} 秒,間隔 {interval} 秒"
},

🛠️ Refactor suggestion

Remove duplicate tooltip literals

Same remark as for zh-CN: tooltip keys are present under both input.* and rateLimit.*.
Keep a single source to avoid divergent translations.

🤖 Prompt for AI Agents
In src/renderer/src/i18n/zh-HK/chat.json around lines 14 to 20, there are
duplicate tooltip literals under both input.* and rateLimit.* keys. To fix this,
identify the tooltip keys that are duplicated and consolidate them by keeping
only one source of each tooltip literal. Remove the duplicates from either
input.* or rateLimit.* to ensure translations remain consistent and avoid
divergence.

zerob13 merged commit b8577e5 into ThinkInAIXYZ:dev on Aug 4, 2025
2 checks passed
zerob13 added a commit that referenced this pull request Aug 6, 2025
* fix: prevent tooltip from overlapping HTML preview in ArtifactDialog (#683)

* fix: unwatch is not a function error in knowledge settings components (#684)

* fix: resolve message editing and markdown rendering issues (#685)

* docs: update Electron-Vite link format to full URL (#687)

* fix: built-in knowledge settings not loaded when first entering (#688)

* fix: enhance lifecycle management by adding unmounted hook and MCP state watcher

* fix: improve knowledge destruction confirmation and enhance unwatch handling in settings

* fix: resolve database lock issue in data reset functionality (#689)

* feat: implement provider-based request rate limit (#686)

* feat: implement provider-based request rate limit

* feat: add input validation with confirmation dialog

* refactor: merge RateLimitPresenter into LLMProviderPresenter

* Optimize/builtin knowledge interaction (#690)

* feat: improve builtInKnowledge file icon style

* feat: add builtInKnowledge resume icon tooltip

* feat: format code

* Update src/renderer/src/i18n/fa-IR/settings.json

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* Update src/renderer/src/i18n/ja-JP/settings.json

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* Update src/renderer/src/i18n/fr-FR/settings.json

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* Update settings.json

* Update src/renderer/src/i18n/fr-FR/settings.json

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* fix: Fix French and Korean translations for pause and delete file confirmation messages

---------

Co-authored-by: sqsyli <[email protected]>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* fix: fixed dirty data problems (#691)

fixed the tool error in old sessions caused by dirty data
fixed the tool list not being reset when creating a new session

* feat: add mentions support in deeplink (#692)

* feat: add mentions support in chat input and store

* style: format imports and clean up code for better readability

* fix: improve mentions handling in handleStart method

* fix: prompt attach files (#695)

* fix: update file icon handling and improve file upload process

* fix: remove redundant FileItem interface definition in PromptSetting.vue

* fix: implement confirmation dialog for prompt deletion in PromptSetting.vue

* fix: streamline file upload process by removing redundant file content reading logic

* feat: add npm registry caching and optimization system (#697)

* fix: resolve input focus loss issue after sending messages (#694)

* fix: resolve deeplink message not displaying when triggered from settings page (#699)

* fix: update current tools assignment to include tool names when MCP tools are not enabled (#698)

* feat: add support for new models  (#700)

* chore: update ollama model settings

* chore: add support for gpt-oss

* chore: bump to 0.2.9

* fix: refine #698 (#701)

* fix: update current tools assignment to include tool names when MCP tools are not enabled

* fix: update enabled MCP tools assignment to include tool names on session switch

* feat: add Claude Opus 4.1 support and fix anthropic model list dialog (#702)

---------

Co-authored-by: yyhhyyyyyy <[email protected]>
Co-authored-by: cycleccc <[email protected]>
Co-authored-by: hllshiro <[email protected]>
Co-authored-by: sqsyli <[email protected]>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

Successfully merging this pull request may close these issues.

[Feature] Can a request rate limit be added for API models?
2 participants