Konosuke Sakai committed on
docs : replace Core ML with OpenVINO (#2686)
README.md
CHANGED
@@ -293,7 +293,7 @@ This can result in significant speedup in encoder performance. Here are the inst
 The first time run on an OpenVINO device is slow, since the OpenVINO framework will compile the IR (Intermediate Representation) model to a device-specific 'blob'. This device-specific blob will get
 cached for the next run.
 
-For more information about the
+For more information about the OpenVINO implementation please refer to PR [#1037](https://github.com/ggerganov/whisper.cpp/pull/1037).
 
 ## NVIDIA GPU support
 
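The caching behavior the diffed paragraph describes (compile the IR model into a device-specific blob once, then reuse it on later runs) can be sketched generically. This is an illustrative stand-in, not whisper.cpp's or OpenVINO's actual code; the function `compile_for_device` and its fake "compilation" step are hypothetical:

```python
import hashlib
import os
import tempfile

def compile_for_device(ir_model: bytes, device: str, cache_dir: str) -> bytes:
    """Return a device-specific 'blob' for an IR model, caching it on disk.

    The expensive compile step runs only on the first call for a given
    (model, device) pair; later calls load the cached blob, mirroring in
    miniature the first-run-slow / cached-thereafter behavior described
    in the README text above.
    """
    key = hashlib.sha256(ir_model + device.encode()).hexdigest()
    path = os.path.join(cache_dir, key + ".blob")
    if os.path.exists(path):                      # cache hit: fast path
        with open(path, "rb") as f:
            return f.read()
    blob = device.encode() + b":" + ir_model      # stand-in for real compilation
    with open(path, "wb") as f:                   # cache for the next run
        f.write(blob)
    return blob

cache = tempfile.mkdtemp()
first = compile_for_device(b"<ir model>", "GPU", cache)
second = compile_for_device(b"<ir model>", "GPU", cache)  # served from cache
assert first == second
```

In the real OpenVINO runtime the same effect is configured through its model-caching support rather than hand-rolled like this; the point here is only the compile-once, reuse-blob pattern.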