Konosuke Sakai committed on
Commit db94c1c · unverified · 1 Parent(s): e8833ea

docs : replace Core ML with OpenVINO (#2686)

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -293,7 +293,7 @@ This can result in significant speedup in encoder performance. Here are the inst
 The first time run on an OpenVINO device is slow, since the OpenVINO framework will compile the IR (Intermediate Representation) model to a device-specific 'blob'. This device-specific blob will get
 cached for the next run.
 
-For more information about the Core ML implementation please refer to PR [#1037](https://github.com/ggerganov/whisper.cpp/pull/1037).
+For more information about the OpenVINO implementation please refer to PR [#1037](https://github.com/ggerganov/whisper.cpp/pull/1037).
 
 ## NVIDIA GPU support
 