Model:
kingabzpro/wav2vec2-large-xls-r-300m-Urdu
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice dataset. Its results on the evaluation set are listed in the tables below.

To reproduce the evaluation on the Common Voice 8.0 Urdu test split, run:

```bash
python eval.py --model_id kingabzpro/wav2vec2-large-xls-r-300m-Urdu --dataset mozilla-foundation/common_voice_8_0 --config ur --split test
```
Usage example with the transformers pipeline, streaming a test sample from Common Voice 8.0:

```python
from datasets import load_dataset, Audio
from transformers import pipeline

model = "kingabzpro/wav2vec2-large-xls-r-300m-Urdu"

# Stream the Urdu test split of Common Voice 8.0 (requires Hugging Face authentication).
data = load_dataset("mozilla-foundation/common_voice_8_0", "ur", split="test", streaming=True, use_auth_token=True)
sample_iter = iter(data.cast_column("path", Audio(sampling_rate=16_000)))
sample = next(sample_iter)

# Run chunked inference on the first sample.
asr = pipeline("automatic-speech-recognition", model=model)
prediction = asr(sample["path"]["array"], chunk_length_s=5, stride_length_s=1)
prediction
# => {'text': 'اب یہ ونگین لمحاتانکھار دلمیں میںفوث کریلیا اجائ'}
```
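For a quick check outside the streaming dataset, the same pipeline also accepts raw audio loaded from disk. A minimal sketch, where "speech.wav" is a placeholder path (not part of the model repository) and the file is resampled to the 16 kHz rate the model expects:

```python
import librosa
from transformers import pipeline

# Load the fine-tuned Urdu checkpoint into an ASR pipeline.
asr = pipeline("automatic-speech-recognition", model="kingabzpro/wav2vec2-large-xls-r-300m-Urdu")

# "speech.wav" is a placeholder file; resample to 16 kHz for the model.
audio, _ = librosa.load("speech.wav", sr=16_000)

# Same chunked-inference settings as in the streaming example above.
print(asr(audio, chunk_length_s=5, stride_length_s=1)["text"])
```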
The following results were logged during training:
| Training Loss | Epoch | Step | Validation Loss | WER | CER |
|---|---|---|---|---|---|
| 3.6398 | 30.77 | 400 | 3.3517 | 1.0 | 1.0 |
| 2.9225 | 61.54 | 800 | 2.5123 | 1.0 | 0.8310 |
| 1.2568 | 92.31 | 1200 | 0.9699 | 0.6273 | 0.2575 |
| 0.8974 | 123.08 | 1600 | 0.9715 | 0.5888 | 0.2457 |
| 0.7151 | 153.85 | 2000 | 0.9984 | 0.5588 | 0.2353 |
| 0.6416 | 184.62 | 2400 | 0.9889 | 0.5607 | 0.2370 |
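For reference, the WER and CER columns are the standard word- and character-error-rate metrics. A minimal sketch of computing them with the `evaluate` library, using a made-up reference/prediction pair rather than real dataset transcripts:

```python
import evaluate

# Standard word- and character-error-rate metrics (lower is better).
wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

# Toy Urdu reference/prediction pair, purely for illustration.
references = ["یہ ایک مثال ہے"]
predictions = ["یہ ایک مثال"]

print("WER:", wer_metric.compute(references=references, predictions=predictions))
print("CER:", cer_metric.compute(references=references, predictions=predictions))
```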
Test-set WER (%) with and without language-model decoding:

| WER without LM | WER with LM (run ./eval.py) |
|---|---|
| 52.03 | 39.89 |
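The "With LM" column comes from decoding the CTC logits with an n-gram language model via ./eval.py instead of plain greedy decoding. Below is a rough sketch of LM-boosted decoding using Wav2Vec2ProcessorWithLM; whether this checkpoint bundles an n-gram LM is an assumption here, and "speech.wav" is again a placeholder file:

```python
import librosa
import torch
from transformers import AutoModelForCTC, Wav2Vec2ProcessorWithLM

model_id = "kingabzpro/wav2vec2-large-xls-r-300m-Urdu"

# Wav2Vec2ProcessorWithLM wraps the tokenizer, feature extractor and a
# pyctcdecode beam-search decoder; it only loads if the repo ships an n-gram LM.
processor = Wav2Vec2ProcessorWithLM.from_pretrained(model_id)
model = AutoModelForCTC.from_pretrained(model_id)

# "speech.wav" is a placeholder audio file, resampled to the expected 16 kHz.
audio, _ = librosa.load("speech.wav", sr=16_000)
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# batch_decode runs beam search over the logits, rescored by the n-gram LM.
print(processor.batch_decode(logits.numpy()).text[0])
```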