Add support for using whisper models from Huggingface by specifying the model id. (#334)

* Add support for downloading CTranslate-converted models from Huggingface.

* Update utils.py to pass Flake8.

* Update utils.py to pass black.

* Remove redundant usage instructions.

* Apply suggestions from code review

Co-authored-by: Guillaume Klein <guillaumekln@users.noreply.github.com>

---------

Co-authored-by: Guillaume Klein <guillaumekln@users.noreply.github.com>
Author: zh-plus
Date: 2023-07-03 23:40:10 +08:00
Committed by: GitHub
Parent: c0d93d0829
Commit: c7cb2aa8d4
3 changed files with 29 additions and 10 deletions


@@ -161,6 +161,18 @@ ct2-transformers-converter --model openai/whisper-large-v2 --output_dir whisper-
Models can also be converted from code. See the [conversion API](https://opennmt.net/CTranslate2/python/ctranslate2.converters.TransformersConverter.html).
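As a sketch of what that can look like (the output directory name and the `float16` quantization below are illustrative choices, not prescribed values):
```python
import ctranslate2

# Convert the Transformers Whisper checkpoint to the CTranslate2 format in Python,
# mirroring the ct2-transformers-converter command shown above.
converter = ctranslate2.converters.TransformersConverter("openai/whisper-large-v2")
converter.convert("whisper-large-v2-ct2", quantization="float16")
```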
### Load a converted model
1. Directly load the model from a local directory:
```python
model = faster_whisper.WhisperModel('whisper-large-v2-ct2')
```
2. [Upload your model to the Hugging Face Hub](https://huggingface.co/docs/transformers/model_sharing#upload-with-the-web-interface) and load it from its name:
```python
model = faster_whisper.WhisperModel('username/whisper-large-v2-ct2')
```
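Loading a model by its Hub name implies that the converted files are downloaded first; this commit wires that up in `utils.py`. The sketch below only illustrates the general idea with `huggingface_hub.snapshot_download`, and the `allow_patterns` filter is an assumption rather than the exact list used by faster-whisper:
```python
import faster_whisper
import huggingface_hub

# Fetch a CTranslate2-converted Whisper repository from the Hugging Face Hub.
# The repo id reuses the illustrative name above; the file patterns are assumed.
model_dir = huggingface_hub.snapshot_download(
    "username/whisper-large-v2-ct2",
    allow_patterns=["*.bin", "*.json", "*.txt"],
)

# The returned local cache path can then be loaded like any local directory.
model = faster_whisper.WhisperModel(model_dir)
```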
## Comparing performance against other implementations
If you are comparing the performance against other Whisper implementations, you should make sure to run the comparison with similar settings. In particular: