Model:
techiaith/wav2vec2-xlsr-ft-cy
facebook/wav2vec2-large-xlsr-53 fine-tuned on the Welsh Common Voice version 11 dataset.
Source code and scripts for training the acoustic and KenLM language models, as well as examples of inference for transcription and of a self-hosted API service, can be found at https://github.com/techiaith/docker-wav2vec2-xlsr-ft-cy.
The wav2vec2-xlsr-ft-cy (acoustic) model can be used directly (without a language model) as follows:
```python
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("techiaith/wav2vec2-xlsr-ft-cy")
model = Wav2Vec2ForCTC.from_pretrained("techiaith/wav2vec2-xlsr-ft-cy")

# audio_file: path to a speech recording; librosa resamples it to the
# 16kHz sampling rate the model expects
audio, rate = librosa.load(audio_file, sr=16000)

inputs = processor(audio, sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

# greedy decoding
predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
```
See https://github.com/techiaith/docker-wav2vec2-xlsr-ft-cy/releases/tag/22.10 for more details and examples of using KenLM with the Parlance PyTorch CTC decode bindings library: https://github.com/parlance/ctcdecode
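For illustration only, here is a minimal sketch of KenLM-assisted beam-search decoding with ctcdecode, continuing from the `logits` computed in the greedy example above. It is not taken from the release linked above: the `kenlm.arpa` path and the `alpha`, `beta` and `beam_width` values are placeholder assumptions, and the text reconstruction assumes the usual wav2vec2 convention of `|` as the word delimiter.

```python
from ctcdecode import CTCBeamDecoder

# build the label list in vocabulary-index order from the tokenizer
vocab = processor.tokenizer.get_vocab()
labels = [token for token, _ in sorted(vocab.items(), key=lambda kv: kv[1])]

decoder = CTCBeamDecoder(
    labels,
    model_path="kenlm.arpa",  # hypothetical path to a KenLM language model
    alpha=0.5,                # placeholder language-model weight
    beta=1.0,                 # placeholder word-insertion bonus
    beam_width=100,
    blank_id=processor.tokenizer.pad_token_id,  # CTC blank is the pad token
    log_probs_input=True,
)

# decoder expects (batch, time, vocab) probabilities
probs = torch.nn.functional.log_softmax(logits, dim=-1)
beam_results, beam_scores, timesteps, out_lens = decoder.decode(probs)

# take the best beam for the first utterance and map tokens back to text
best = beam_results[0][0][: out_lens[0][0]]
text = "".join(labels[i] for i in best).replace("|", " ")
print("Prediction:", text)
```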
On the Welsh Common Voice version 11 test set, the standalone techiaith/wav2vec2-xlsr-ft-cy model achieves a word error rate (WER) of 6.04%.
When decoding is assisted by the KenLM language model, the WER on the same test set drops to 4.05%.
See: https://github.com/techiaith/docker-wav2vec2-xlsr-ft-cy/blob/main/train/python/evaluate.py
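As a rough illustration of how such a score is obtained (this sketch is not the repository's evaluate.py), the jiwer library computes WER by comparing predicted transcripts against reference transcripts; the strings below are placeholders:

```python
import jiwer

references = ["mae hen wlad fy nhadau"]   # placeholder ground-truth transcript
predictions = ["mae hen wlad fy nhadau"]  # placeholder model output

# WER = (substitutions + deletions + insertions) / reference word count
print("WER:", jiwer.wer(references, predictions))
```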