Space: Xenobd/whisper.cpp (duplicated from natasa365/whisper.cpp), status: Running
whisper.cpp / ggml / src (revision 7631e20) — 6.35 MB, 100 contributors
History: 497 commits
Latest commit 4683df3 (unverified) by midnight, 11 months ago: cmake : fix compile assumptions for power9/etc (#2777)
Directories:
ggml-amx       ggml : adapt AMX to tensor->grad removal (llama/0)                              about 1 year ago
ggml-blas      ggml : add support for dynamic loading of backends (llama/10469)                about 1 year ago
ggml-cann      llama : add Qwen2VL support + multimodal RoPE (llama/10361)                     about 1 year ago
ggml-cpu       cmake : fix compile assumptions for power9/etc (#2777)                          11 months ago
ggml-cuda      CUDA: fix Volta FlashAttention logic (llama/11615)                              11 months ago
ggml-hip       CUDA: use mma PTX instructions for FlashAttention (llama/11583)                 11 months ago
ggml-kompute   llama : add Qwen2VL support + multimodal RoPE (llama/10361)                     about 1 year ago
ggml-metal     metal: Handle null returned from MTLCreateSystemDefaultDevice() (llama/11441)   11 months ago
ggml-musa      CUDA: use mma PTX instructions for FlashAttention (llama/11583)                 11 months ago
ggml-opencl    ggml : add opencl backend (skip) (llama/10693)                                  11 months ago
ggml-rpc       rpc : better caching of the base buffer pointer (llama/11331)                   11 months ago
ggml-sycl      SYCL : SOFTMAX F16 mask support and other fixes (llama/11261)                   11 months ago
ggml-vulkan    vulkan: implement initial support for IQ2 and IQ3 quantizations (llama/11360)   11 months ago
Files:
CMakeLists.txt        11.8 kB   `ci`: use sccache on windows instead of ccache (llama/11545)        11 months ago
ggml-alloc.c          38.5 kB   CUDA: backwards pass for misc. ops, add tests (llama/11257)         11 months ago
ggml-backend-impl.h   12 kB     rpc : early register backend devices (llama/11262)                  11 months ago
ggml-backend-reg.cpp  17.2 kB   ggml : allow loading backend with env variable (ggml/1059)          12 months ago
ggml-backend.cpp      77.5 kB   ggml-backend : only offload from host buffers (fix) (llama/11124)   12 months ago
ggml-common.h         133 kB    CUDA: rename macros to avoid conflicts with WinAPI (llama/10736)    about 1 year ago
ggml-impl.h           18.3 kB   GGUF: C++ refactor, backend support, misc fixes (llama/11030)       12 months ago
ggml-opt.cpp          31.7 kB   ggml-opt: fix data corruption (ggml/1022)                           about 1 year ago
ggml-quants.c         214 kB    ggml : refactor online repacking (llama/10446)                      about 1 year ago
ggml-quants.h         8.34 kB   ggml : build backends as libraries (llama/10256)                    about 1 year ago
ggml-threading.cpp    250 B     ggml : build backends as libraries (llama/10256)                    about 1 year ago
ggml-threading.h      198 B     remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (llama/10797)               about 1 year ago
ggml.c                209 kB    CPU/CUDA: fix (GQA) mul mat back, add CUDA support (llama/11380)    11 months ago
gguf.cpp              45 kB     cmake : add sanitizer flags for llama.cpp (llama/11279)             11 months ago