Releases: machinewrapped/llm-subtrans

GPT-Subtrans is now LLM-Subtrans

16 Aug 12:51

Since we have supported many providers besides OpenAI's GPT models for a long time, the name has become something of a misnomer. The addition of OpenRouter as the default provider seemed like a good time to officially change the name to the more inclusive LLM-Subtrans.

Settings should be migrated automatically to the new app data location.

The repository will be renamed to reflect the new name as well - which is probably going to cause chaos, but fingers crossed!

Internally the app now uses the latest version of Python (3.13) on Windows. MS Defender seems a bit suspicious about the change - hopefully only temporarily (I've submitted the build for scanning).

This also comes with a new logo (courtesy of Qwen3) and icon set so that we're no longer misappropriating built-in Qt icons.

LLM-Subtrans Logo (full size)

Support for OpenRouter

15 Aug 15:27

Added OpenRouter as a translation provider. OpenRouter is an aggregator that provides access to a wide variety of models with a single API key. This includes a number of quite capable models that are free to use, e.g.

  • google/gemini-2.0-flash-exp:free
  • deepseek/deepseek-chat-v3-0324:free
  • qwen/qwen3-235b-a22b:free
  • meta-llama/llama-3.1-405b-instruct:free

Since OpenRouter provides access to so many models, the model list is grouped by model family and filtered by default to show only models from the "Translation" category - though this excludes many models that are actually very good at translation, including most if not all of the free options.

You can choose to let OpenRouter select the model to use automatically, based on criteria you can configure in their dashboard (e.g. which providers to include or exclude, and whether to prioritise price or speed). This is the default setting, but obviously something of a gamble.

If you are installing from source, OpenRouter does not require any additional dependencies and is now the default provider for llm-subtrans.
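For reference, OpenRouter exposes an OpenAI-compatible chat completions endpoint, so a translation request using one of the free models listed above can be sketched like this. The helper function and prompt wording are illustrative, not the app's actual code; only the endpoint URL and request shape come from OpenRouter's public API:

```python
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_translation_request(model: str, lines: list[str], target_language: str) -> dict:
    """Build an OpenAI-compatible chat payload for OpenRouter (illustrative)."""
    prompt = f"Translate these subtitle lines into {target_language}:\n" + "\n".join(lines)
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a subtitle translator."},
            {"role": "user", "content": prompt},
        ],
    }

# One of the free models listed above:
payload = build_translation_request(
    "deepseek/deepseek-chat-v3-0324:free", ["Hello", "Goodbye"], "Spanish"
)

# To actually send it (requires an OpenRouter API key):
# import requests
# response = requests.post(
#     OPENROUTER_URL,
#     headers={"Authorization": "Bearer <YOUR_KEY>"},
#     json=payload,
# )
```

Because the request shape is the standard OpenAI one, any OpenAI-compatible client library can also be pointed at the OpenRouter base URL.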

Support for GPT-5 and app localization

12 Aug 18:47

** Re-released with a fix for English locale **

Added support for the GPT-5 models gpt-5, gpt-5-mini and gpt-5-nano. These require OpenAI's newer Responses API rather than the older Chat Completions API, so the client has been updated. Updated the documentation with recommendations on which model to use... spoiler alert: not gpt-5-nano!
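The practical difference for client code is the request shape: Chat Completions takes a messages list, while the Responses API takes an input and uses max_output_tokens instead of max_tokens. A minimal sketch of the two payloads (parameter names follow OpenAI's published API; the values and helper functions are illustrative):

```python
def chat_payload(model: str, prompt: str) -> dict:
    """Older Chat Completions request shape."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 1024,
    }

def responses_payload(model: str, prompt: str) -> dict:
    """Newer Responses API request shape, required by the gpt-5 family."""
    return {
        "model": model,
        "input": prompt,
        "max_output_tokens": 1024,
    }

old = chat_payload("gpt-4o", "Translate: Bonjour")
new = responses_payload("gpt-5-mini", "Translate: Bonjour")
```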

Something I've wanted to do for a long time is add localization to GPT-Subtrans so that you can use it in your own language - it also seemed like a good opportunity to try out AI agent workflows, so this feature was a collaboration between:

  • Claude Code
  • Cursor Agent
  • GitHub Copilot Agent
  • Gemini Code Assist (mainly as reviewer)
  • Gemini 2.5 Pro (translator)
  • ChatGPT (translator)
  • @machinewrapped (mainly the $ for API credits and commiserating with Gemini when it repeatedly failed to write working code)

Localization is provided for Spanish and Czech to start with - I encourage users to contribute localizations for their own locale. The process is quite straightforward if you can install from source, and it is documented in docs/localization_contributing.md. I have no current plans to add further locales myself, but could probably be persuaded if there was a volunteer to test it.

Note that the PR was over 10,000 lines of code, most written by AI, so there are likely to be bugs in the initial release, but the standard load/translate/edit operations are all tested and working.

Full Changelog: v1.1.2...v1.2.0

Fix for random crash

10 May 12:40

Fixed a random crash when writing to the log window. The bug has been there for a long time, but it became much more likely to trigger in recent versions after an update to the Qt GUI library - particularly when opening a project file.

Fix for Save Instructions losing task type

23 Apr 20:05

Minor fix: the new task_type field for custom instructions was not being written to the instruction file when saving via Edit Instructions.

Improved "Improve Quality" instructions

18 Apr 11:52

Updated the "Improve Quality" instructions to include multilingual examples, to show the model that it isn't expected to translate to English.

Made the request/response format more flexible, so that the model isn't required to respond with a "Translation" for each line, to further emphasize the nature of the task.

Update: re-released with the task type included in the Edit Instructions dialog so that it can be checked and changed.

Known issue: Sometimes the GUI crashes when loading a project. It seems to be most likely on first run.

What's Changed

  • Added Task type to instructions files by @machinewrapped in #236
  • Updated dependencies to latest versions

Full Changelog: v1.0.9...v1.1.0

Claude 3.7 Sonnet Thinking Mode

03 Mar 15:20
5e7b3ef

Added support for Claude 3.7 Sonnet with extended thinking mode - enable "Thinking" in the provider options and specify the number of tokens Claude is allowed to use for thinking. Claude 3.7 raises the maximum output tokens from 4096 to 8192, so that setting can be increased and some tokens set aside for thinking (the minimum thinking budget is 1024).

If thinking is enabled the model's thoughts for each batch can be seen by double-clicking the batch and selecting the "Reasoning" tab.
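The constraints described above can be sketched as a small validation helper. The thinking parameter shape follows Anthropic's published Messages API; the helper itself is illustrative, not the app's actual code:

```python
def thinking_params(max_tokens: int, thinking_budget: int) -> dict:
    """Build Anthropic message parameters with extended thinking enabled.

    Claude 3.7 raises the output ceiling to 8192 tokens; the thinking
    budget (minimum 1024) is carved out of that overall allowance, so it
    must be strictly less than max_tokens.
    """
    if thinking_budget < 1024:
        raise ValueError("Thinking budget must be at least 1024 tokens")
    if thinking_budget >= max_tokens:
        raise ValueError("Thinking budget must leave room for the response")
    return {
        "max_tokens": max_tokens,
        "thinking": {"type": "enabled", "budget_tokens": thinking_budget},
    }

params = thinking_params(max_tokens=8192, thinking_budget=2048)
```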


The Claude API doesn't indicate which models support thinking so you'll need to manually disable Thinking in settings if you go back to one of the earlier Claude models.

With this release the available models are finally retrieved from the Claude API instead of being baked into the app. Any previous model setting will be reset because it now uses user-friendly model names rather than their IDs.

Fixed Edit Instructions

19 Feb 17:34

Fixed a regression where the "Edit Instructions" button would reset the instructions to default.

  • Gemini can return multiple response candidates, so the client now tries to choose one that has content, and handles responses where no candidate has content more gracefully. An empty response probably implies that Gemini censored the request for that batch.

Changed "Local Server" to "Custom Server"

15 Feb 10:09

NEW: repackaged the Windows zip because of a false positive on some Antivirus software. Includes some tweaks to the themes.

Renamed "Local Server" provider to "Custom Server" - it was never a requirement that the server be local, so this makes it clearer that the provider can be used with any OpenAI-compatible API.

Added a max_completion_tokens option for Custom Server, since OpenAI are no longer accepting max_tokens for some of their own models. You should probably set one or the other or neither, not both.

Plus several arguably more important fixes for non-Windows platforms:

  • Fixed the MacOS package builder
  • Updated to latest PySide6 GUI modules
  • Force the Qt theme to Fusion Light, for cross-platform compatibility with app themes
  • Added a light Large theme

Removed boto3 from the packaged build to reduce the size - Bedrock is pretty niche, and if you can handle setting up AWS then you can definitely handle installing gpt-subtrans from source!

Fixed retries, updated instructions

13 Feb 17:21

Some of the APIs are unreliable at the moment, so requests quite often need to be retried. I've cleaned up the retry mechanism for OpenAI-based clients (which includes DeepSeek) and fixed the Gemini retry logic so that both retry in the event of the common API failures I am seeing.

I also added a reuse_client option for DeepSeek that defaults to False, meaning that a new connection will be established for each translation request. I noticed that the first request succeeds much more often than subsequent ones - creating a new client for each request seems to improve the odds of success.

I also updated the custom instructions for OCR Errors and Whisper to match the format of the newer default instructions, which in my experience produces better results than the older format.

A MacOS version will be provided if I can get it to build, otherwise check previous versions to find one or install from source.