large-v2 figure and arxiv url update
@@ -1,7 +1,7 @@
# Whisper

[[Blog]](https://openai.com/blog/whisper)
-[[Paper]](https://cdn.openai.com/papers/whisper.pdf)
+[[Paper]](https://arxiv.org/abs/2212.04356)
[[Model card]](model-card.md)
[[Colab example]](https://colab.research.google.com/github/openai/whisper/blob/master/notebooks/LibriSpeech.ipynb)

@@ -66,7 +66,7 @@ There are five model sizes, four with English-only versions, offering speed and
For English-only applications, the `.en` models tend to perform better, especially for the `tiny.en` and `base.en` models. We observed that the difference becomes less significant for the `small.en` and `medium.en` models.

-Whisper's performance varies widely depending on the language. The figure below shows a WER breakdown by languages of Fleurs dataset, using the `large` model. More WER and BLEU scores corresponding to the other models and datasets can be found in Appendix D in [the paper](https://cdn.openai.com/papers/whisper.pdf).
+Whisper's performance varies widely depending on the language. The figure below shows a WER breakdown by languages of Fleurs dataset, using the `large-v2` model. More WER and BLEU scores corresponding to the other models and datasets can be found in Appendix D in [the paper](https://arxiv.org/abs/2212.04356).

