Replies: 3 comments
-
Hi, I'm facing a similar issue. Did you manage to find out why? |
-
@monk1337 Did you select the correct language? |
-
For me, reducing the learning rate to a very low value worked to a degree: learning_rate = 1.75e-8. Here are my other params: training_args = Seq2SeqTrainingArguments(
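For context, a lowered learning rate is passed through `Seq2SeqTrainingArguments` roughly like this. This is a minimal sketch assuming the argument names used in the Hugging Face Whisper fine-tuning tutorial; apart from the learning rate quoted above, all values here are illustrative, not the commenter's actual settings:

```python
from transformers import Seq2SeqTrainingArguments

# Illustrative configuration; only learning_rate comes from the comment above.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-finetuned",  # hypothetical output path
    learning_rate=1.75e-8,                   # the very low LR mentioned above
    per_device_train_batch_size=16,          # illustrative value
    warmup_steps=500,                        # illustrative value
    evaluation_strategy="steps",             # evaluate periodically during training
    predict_with_generate=True,              # required to compute WER at eval time
)
```

Note that such an extremely low learning rate mostly freezes the model, which is consistent with it only "working to a degree".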
-
Hi,
I've recently created a dataset using speech-to-text APIs on custom documents. The dataset consists of 1,000 audio samples, with 700 designated for training and 300 for testing. In total, this equates to about 4 hours of audio, where each clip is approximately 30 seconds long.
I'm attempting to fine-tune the Whisper small model with Hugging Face's script, following their tutorial "Fine-Tune Whisper For Multilingual ASR with 🤗 Transformers".
Before diving into the fine-tuning, I evaluated OpenAI's pre-trained model, which scored a WER of 23.078%.
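As a reminder of what that baseline number measures: WER is the word-level edit distance (substitutions + insertions + deletions) divided by the number of words in the reference. A minimal self-contained sketch (in practice the tutorial uses a library metric such as `evaluate`'s `wer` rather than hand-rolled code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for edit distance over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)
```

For example, one substituted word out of a three-word reference gives a WER of 1/3.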
However, as my fine-tuning progresses, I'm observing some unexpected behavior:
As can be seen, the validation loss and WER are both rising during fine-tuning. I'm at a bit of a loss here. Why might this be happening? Any insights or recommendations would be greatly appreciated.
Thank you in advance!