Martin Warnaar committed on
Commit 3c9afe6 · unverified · 1 Parent(s): 71e17f4

readme : better wording (#1064)

Files changed (1): README.md +2 -2
README.md CHANGED

@@ -313,7 +313,7 @@ For more information about the Core ML implementation please refer to PR [#566](
 
 ## NVIDIA GPU support via cuBLAS
 
-With NVIDIA cards, the Encoder processing can be offloaded to the GPU to a large extend through cuBLAS.
+With NVIDIA cards the Encoder processing can to a large extent be offloaded to the GPU through cuBLAS.
 First, make sure you have installed `cuda`: https://developer.nvidia.com/cuda-downloads
 
 Now build `whisper.cpp` with cuBLAS support:
@@ -325,7 +325,7 @@ WHISPER_CUBLAS=1 make -j
 
 ## OpenCL GPU support via CLBlast
 
-For cards and integrated GPUs that support OpenCL, the Encoder processing can be largely offloaded to the GPU through CLBlast. This is especially useful for users with AMD APU's or low end devices for up to ~2x speedup.
+For cards and integrated GPUs that support OpenCL, the Encoder processing can be largely offloaded to the GPU through CLBlast. This is especially useful for users with AMD APUs or low end devices for up to ~2x speedup.
 
 First, make sure you have installed `CLBlast` for your OS or Distribution: https://github.com/CNugteren/CLBlast
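For context, the build step quoted in the second hunk header (`WHISPER_CUBLAS=1 make -j`) can be sketched end to end as below. This is a sketch, not part of the diff: the `WHISPER_CLBLAST=1` flag for the OpenCL path is an assumption inferred from the CLBlast section and does not appear in this commit.

```shell
# Sketch: build whisper.cpp with NVIDIA GPU support via cuBLAS.
# Requires the CUDA toolkit: https://developer.nvidia.com/cuda-downloads
git clone https://github.com/ggerganov/whisper.cpp
cd whisper.cpp
WHISPER_CUBLAS=1 make -j   # flag taken from the hunk header in this diff

# For OpenCL cards/APUs, the analogous CLBlast build is assumed to be:
# (requires CLBlast installed: https://github.com/CNugteren/CLBlast)
# WHISPER_CLBLAST=1 make -j
```

Only one of the two GPU backends is typically enabled per build; the plain `make -j` remains the CPU-only default.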