Commit: Minor

Files changed:

- README.md +5 -3
- models/README.md +2 -2
- whisper.cpp +1 -1
README.md CHANGED

@@ -12,7 +12,7 @@ High-performance inference of [OpenAI's Whisper](https://github.com/openai/whisp
 - Zero memory allocations at runtime
 - Runs on the CPU
 - [C-style API](https://github.com/ggerganov/whisper.cpp/blob/master/whisper.h)
-- Supported platforms: Linux, Mac OS (Intel and Arm), Raspberry Pi, Android
+- Supported platforms: Linux, Mac OS (Intel and Arm), Windows (MinGW), Raspberry Pi, Android

 ## Usage

@@ -34,7 +34,7 @@ For a quick demo, simply run `make base.en`:

 ```java
 $ make base.en
-cc -O3 -std=c11
+cc -O3 -std=c11 -Wall -Wextra -Wno-unused-parameter -Wno-unused-function -pthread -c ggml.c
 c++ -O3 -std=c++11 -Wall -Wextra -Wno-unused-parameter -Wno-unused-function -pthread -c whisper.cpp
 c++ -O3 -std=c++11 -Wall -Wextra -Wno-unused-parameter -Wno-unused-function -pthread main.cpp whisper.o ggml.o -o main
 ./main -h

@@ -248,6 +248,8 @@ The original models are converted to a custom binary format. This allows to pack
 - vocabulary
 - weights

-You can download the converted models using the [download-ggml-model.sh](download-ggml-model.sh) script
+You can download the converted models using the [download-ggml-model.sh](download-ggml-model.sh) script or from here:
+
+https://ggml.ggerganov.com

 For more details, see the conversion script [convert-pt-to-ggml.py](convert-pt-to-ggml.py) or the README in [models](models).
models/README.md CHANGED

@@ -4,14 +4,14 @@ The [original Whisper PyTorch models provided by OpenAI](https://github.com/open
 have been converted to custom `ggml` format in order to be able to load them in C/C++. The conversion has been performed using the
 [convert-pt-to-ggml.py](convert-pt-to-ggml.py) script. You can either obtain the original models and generate the `ggml` files
 yourself using the conversion script, or you can use the [download-ggml-model.sh](download-ggml-model.sh) script to download the
-already converted models.
+already converted models from https://ggml.ggerganov.com

 Sample usage:

 ```java
 $ ./download-ggml-model.sh base.en
 Downloading ggml model base.en ...
-models/ggml-base.en.bin 100%[=============================================>] 141.11M 5.41MB/s in 22s
+models/ggml-base.en.bin 100%[=============================================>] 141.11M 5.41MB/s in 22s
 Done! Model 'base.en' saved in 'models/ggml-base.en.bin'
 You can now use it like this:
whisper.cpp CHANGED

@@ -2387,7 +2387,7 @@ int whisper_full(
         // print the prompt
         //printf("\n\n");
         //for (int i = 0; i < prompt.size(); i++) {
-        //    printf("%s: prompt[%d] = %s\n", __func__, i, vocab.id_to_token[prompt[i]].c_str());
+        //    printf("%s: prompt[%d] = %s\n", __func__, i, ctx->vocab.id_to_token[prompt[i]].c_str());
         //}
         //printf("\n\n");