From 2b7be470415d5e39689825d814a5b265114903d2 Mon Sep 17 00:00:00 2001
From: Guillaume Klein
Date: Fri, 24 Mar 2023 09:15:05 +0100
Subject: [PATCH] Update README.md

---
 README.md | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 549c62d..be92753 100644
--- a/README.md
+++ b/README.md
@@ -2,13 +2,13 @@
 # Faster Whisper transcription with CTranslate2
 
-This repository demonstrates how to implement the Whisper transcription using [CTranslate2](https://github.com/OpenNMT/CTranslate2/), which is a fast inference engine for Transformer models.
+**faster-whisper** is a reimplementation of OpenAI's Whisper model using [CTranslate2](https://github.com/OpenNMT/CTranslate2/), which is a fast inference engine for Transformer models.
 
 This implementation is up to 4 times faster than [openai/whisper](https://github.com/openai/whisper) for the same accuracy while using less memory. The efficiency can be further improved with 8-bit quantization on both CPU and GPU.
 
 ## Benchmark
 
-For reference, here's the time and memory usage that are required to transcribe **13 minutes** of audio using different implementations:
+For reference, here's the time and memory usage that are required to transcribe [**13 minutes**](https://www.youtube.com/watch?v=0u7tTptBo9I) of audio using different implementations:
 
 * [openai/whisper](https://github.com/openai/whisper)@[6dea21fd](https://github.com/openai/whisper/commit/6dea21fd7f7253bfe450f1e2512a0fe47ee2d258)
 * [whisper.cpp](https://github.com/ggerganov/whisper.cpp)@[3b010f9](https://github.com/ggerganov/whisper.cpp/commit/3b010f9bed9a6068609e9faf52383aea792b0362)
 
@@ -38,6 +38,8 @@ For reference, here's the time and memory usage that are required to transcribe
 ## Installation
 
+The module can be installed from [PyPI](https://pypi.org/project/faster-whisper/):
+
 ```bash
 pip install faster-whisper
 ```
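
For context on the README section this patch touches, a minimal sketch of what a transcription call looks like once the package is installed. `WhisperModel` and its `transcribe()` method are the faster-whisper API; the model size, audio filename, and `beam_size` value below are illustrative placeholders, not part of this patch.

```python
# Minimal sketch: transcribe an audio file with faster-whisper.
# "small", "audio.mp3", and beam_size=5 are illustrative choices.
from faster_whisper import WhisperModel

# compute_type="int8" uses the 8-bit quantization mentioned in the README (CPU here).
model = WhisperModel("small", device="cpu", compute_type="int8")

# transcribe() returns a generator of segments and an info object
# with the detected language.
segments, info = model.transcribe("audio.mp3", beam_size=5)

print("Detected language:", info.language)
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```

Note that the segments are produced lazily: iterating over the generator is what actually runs the transcription.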