whisper.cpp / ggml

Commit History

ggml : add more generic custom op, remove deprecated custom ops (ggml/1183)
ba7a5f8

Diego Devesa committed on

Revert "sycl:remove redundant memcopy in function ggml_backend_sycl_buffer_set_tensor" (llama/12812)
3d4b079

Neo Zhang Jianyu committed on

opencl: better identify Adreno GPU (llama/12760)
5560cd6

lhez committed on

cuda : fix HIP and MUSA BF16 (llama/0)
6dc5583

ggerganov committed on

sycl: remove redundant memcopy in function ggml_backend_sycl_buffer_set_tensor (llama/12734)
7d3e668

jeffzhou2000 committed on

CANN: fix typo in ggml-cann (llama/12733)
65ced74

jeffzhou2000 committed on

CANN: Refactor to reduce duplicate code (llama/12731)
44ac81c

hipudding committed on

musa: fix compilation warnings in mp_22/31 (llama/12780)
090ad80

R0CKSTAR committed on

vulkan: fix NaN issue in flash attention shader (llama/12776)
77d7613

jeffbolznv committed on

vulkan: Use unclamped loads for flash attention mask (llama/12720)
a76ef69

jeffbolznv committed on

Vulkan: Tune Vulkan mmq int dot shader for performance (llama/12767)
b3bf710

OccamRazor committed on

sycl: allow ggml-sycl configuration and compilation using Visual Studio project/solution (llama/12625)
27cbcc9

Nicolò Scipione committed on

cmake: fix ggml-shaders-gen compiler paths containing spaces (llama/12747)
1c89b7d

Ronny Brendel committed on

vulkan: Hybrid waitForFences/getFenceStatus to reduce fence latency (llama/12630)
ee422be

jeffbolznv committed on

vulkan: set cmake minimum and project name in vulkan-shaders (llama/12744)
2459781

jeffbolznv committed on

CUDA: Prefer vector flash decoding kernel for Gemma models (llama/12738)
5d7a13f

Gaurav Garg JohannesGaessler committed on

vulkan: Fix missing cmake logic for dot product extension (llama/12721)
7a1e8f8

jeffbolznv committed on

fix MUSA compiler warning (llama/12704)
8d43aa6

a3sh committed on

CANN: Support operator SIN COS ARGMAX (llama/12709)
904aaf5

Chenguang Li noemotiovon committed on

Simplify and improve CUDA graphs through use of indirect copy pointers (llama/9017)
a2fdbe6

Alan Gray slaren committed on

CANN: Fix failed test cases (llama/12708)
7d5f3d4

hipudding committed on

opencl: use `max_alloc_size` in backend ctx instead of querying again (llama/12705)
3847456

lhez committed on

vulkan: Implement split_k for coopmat2 flash attention. (llama/12627)
5ab06d6

jeffbolznv committed on

cmake: remove caching from vulkan coopmat checks (llama/12719)
fac18c1

bandoti committed on

vulkan: Implement grouped query attention in the coopmat2 FA shader (llama/12559)
e7bebe6

jeffbolznv committed on

Vulkan: Fix mmq int dot float cache size (llama/12722)
1cecf5d

OccamRazor committed on

llama : add option to override model tensor buffers (llama/11397)
3d000b6

Diego Devesa committed on

ggml : simplify Arm fp16 CPU logic (ggml/1177)
fb13b88

ggerganov committed on

CUDA: don't convert BF16 weights to FP32 (ggml/1174)
332bcaf

Sigbjørn Skjæret committed on

cpu: move all the operators into a separate c++ file (except mul_mat) (ggml/1167)
0754d43

cmdr2 Diego Devesa committed on

get_rows and dup optimization (llama/12671)
ffa5f14

Chenguang Li noemotiovon hipudding committed on

opencl : fix memory allocation size (llama/12649)
b00a8a9

Sparkleholic committed on

metal : use F32 prec in FA kernels (llama/12688)
a49f5c2

ggerganov committed on

Fix clang warning in gguf_check_reserved_keys (llama/12686)
a00d66c

R0CKSTAR committed on

vulkan: fix build when glslc doesn't support coopmat (llama/12683)
f91eb88

Wagner Bruna committed on

SYCL: Rename oneMKL to oneMath (llama/12192)
d89f6fa

Romain Biessy committed on

SYCL: switch to SYCL namespace (llama/12674)
fe06fa2

Akarshan Biswas committed on

ggml : faster ssm scan (llama/10558)
a18cd16

a3sh committed on

Vulkan: Add DP4A MMQ and Q8_1 quantization shader (llama/12135)
06ec111

OccamRazor committed on

cmake : fix whitespace (llama/0)
a596e84

ggerganov committed on

SYCL: Remove misleading ggml_sycl_op_flatten function (llama/12387)
0a9c73a

Akarshan Biswas committed on

metal : use constexpr in FA kernels + fix typedef (llama/12659)
c699617

ggerganov committed on

musa: fix all warnings, re-enable `-DLLAMA_FATAL_WARNINGS=ON` in ci and update doc (llama/12611)
12bb60d

R0CKSTAR committed on

cmake : fix ccache conflict (llama/12522)
22dfdf6

Jay committed on

cpu : rm unused variable (ggml/1166)
86eb3af

ngxson committed on

cpu: de-duplicate some of the operators and refactor (ggml/1144)
09f2f18

cmdr2 committed on

cmake: improve Vulkan cooperative matrix support checks (#2966)
4be7f68

Sandro Hanea committed on

metal : improve FA + improve MoE (llama/12612)
04a3389

ggerganov committed on

vulkan: fix coopmat shader generation when cross-compiling (llama/12272)
7585c4a

Icenowy Zheng bandoti committed on

llamafile : ppc64le GEMV forwarding for FP32. (llama/12594)
1843f18

amritahs-ibm committed on