
whisper-20hrs-random

This model is a fine-tuned version of openai/whisper-large-v2 on the JASMIN-CGN dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4303
  • WER: 22.4947
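
To use the adapter, it can be loaded on top of the base openai/whisper-large-v2 checkpoint with the PEFT library listed under Framework versions. A minimal sketch, assuming a GPU, 16 kHz input audio, and the device/dtype choices shown (these are assumptions, not part of the card):

```python
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

base_id = "openai/whisper-large-v2"
adapter_id = "greenw0lf/whisper-20hrs-random"

# Processor and base model come from the original Whisper checkpoint;
# the fine-tuned weights are applied as a PEFT adapter on top.
processor = WhisperProcessor.from_pretrained(base_id)
base_model = WhisperForConditionalGeneration.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

# Transcribe a 16 kHz waveform `audio` (NumPy array or list of floats):
# inputs = processor(audio, sampling_rate=16000, return_tensors="pt").to(model.device)
# predicted_ids = model.generate(input_features=inputs.input_features.half())
# print(processor.batch_decode(predicted_ids, skip_special_tokens=True))
```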

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 48
  • eval_batch_size: 32
  • seed: 42
  • optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 49
  • num_epochs: 3.0
  • mixed_precision_training: Native AMP
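
As a hedged sketch (not the actual training script), these values map onto a Hugging Face Seq2SeqTrainingArguments configuration roughly as follows; the output_dir and any settings not listed above are assumptions:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-20hrs-random",  # assumed name, not stated in the card
    learning_rate=1e-5,
    per_device_train_batch_size=48,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_torch",                  # AdamW (torch) with default betas/epsilon
    lr_scheduler_type="linear",
    warmup_steps=49,
    num_train_epochs=3.0,
    fp16=True,                            # "Native AMP" mixed-precision training
)
```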

Training results

| Training Loss | Epoch  | Step | Validation Loss | WER     |
|---------------|--------|------|-----------------|---------|
| 1.0263        | 0.1543 | 25   | 1.2168          | 38.0448 |
| 1.0661        | 0.3086 | 50   | 1.1739          | 37.5583 |
| 0.9871        | 0.4630 | 75   | 1.1019          | 36.3337 |
| 0.8873        | 0.6173 | 100  | 1.0211          | 35.1126 |
| 0.8572        | 0.7716 | 125  | 0.9299          | 34.6831 |
| 0.7706        | 0.9259 | 150  | 0.8373          | 32.4219 |
| 0.7349        | 1.0802 | 175  | 0.7395          | 31.9455 |
| 0.6256        | 1.2346 | 200  | 0.6531          | 31.5127 |
| 0.5956        | 1.3889 | 225  | 0.5880          | 28.6141 |
| 0.5474        | 1.5432 | 250  | 0.5423          | 26.7555 |
| 0.5441        | 1.6975 | 275  | 0.5079          | 25.7490 |
| 0.5065        | 1.8519 | 300  | 0.4803          | 23.9541 |
| 0.5186        | 2.0062 | 325  | 0.4605          | 22.4477 |
| 0.4526        | 2.1605 | 350  | 0.4491          | 22.9241 |
| 0.4636        | 2.3148 | 375  | 0.4415          | 22.7229 |
| 0.4768        | 2.4691 | 400  | 0.4364          | 22.6692 |
| 0.4721        | 2.6235 | 425  | 0.4332          | 22.5081 |
| 0.4729        | 2.7778 | 450  | 0.4312          | 22.5115 |
| 0.4786        | 2.9321 | 475  | 0.4303          | 22.4947 |
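
The WER column is reported as a percentage over the evaluation transcripts. A minimal sketch of how such a score is typically computed with the `evaluate` library (the card does not state which tool was used); the strings below are illustrative Dutch examples, not actual JASMIN-CGN data:

```python
import evaluate

wer_metric = evaluate.load("wer")

references = ["dit is een voorbeeldzin"]    # ground-truth transcripts
predictions = ["dit is een voorbeeld zin"]  # model transcripts

# evaluate returns a fraction; the table above reports WER as a percentage.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```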

Framework versions

  • PEFT 0.16.0
  • Transformers 4.52.0
  • PyTorch 2.7.1+cu126
  • Datasets 3.6.0
  • Tokenizers 0.21.2