whisper.cpp / ggml / src / ggml-vulkan

Commit History

08debcd  Vulkan: Set device max size for host memory to avoid OOM warning and fallback to CPU buffer (llama/14249)  [OccamRazor]
b7a7257  cmake: remove shader-gen step-targets from ggml-vulkan (llama/14226)  [bandoti]
bc8b1f7  cmake: clean up external project logic for vulkan-shaders-gen (llama/14179)  [bandoti]
ef3a7d0  vulkan: mutex around vkQueueSubmit (llama/14127)  [jeffbolznv]  (see the sketch after this log)
fdc26e7  vulkan: Better thread-safety for command pools/buffers (llama/14116)  [jeffbolznv]
855a3bf  vulkan: Track descriptor pools/sets per-context (llama/14109)  [jeffbolznv]
dcb106f  Vulkan: Don't default to CPU device (like llvmpipe), even if no other device is available, to allow fallback to CPU backend (llama/14099)  [OccamRazor]
e5107fe  vulkan: Enable VK_KHR_cooperative_matrix extension for Intel Xe2 GPUs (llama/14001)  [rillomas]
00a9e2f  vulkan: automatically deduce size of push constants (llama/13936)  [jeffbolznv]
32985b0  ggml-vulkan: adds support for op CONV_TRANSPOSE_1D (llama/13813)  [etasnadi]
11bac96  vulkan: fix warnings in perf logger querypool code (llama/13937)  [jeffbolznv]
56ddc5b  vulkan: use timestamp queries for GGML_VULKAN_PERF (llama/13817)  [jeffbolznv]
c4be6fb  vulkan : Remove unexpected ; (ggml/1253)  [Kai Pastor]
09c03ad  vulkan: mark IM2COL as supporting non-contig (llama/13783)  [jeffbolznv]
f5f766b  vulkan: support CPY from any type to itself (llama/13695)  [jeffbolznv]
69679f5  vulkan: Disable coopmat/coopmat2/bfloat extensions if glslc doesn't support it (llama/13696)  [jeffbolznv]
6975ec2  use LOG_WARN to replace `std::cerr` (llama/13657)  [Judd]
8602d10  vulkan: fix warnings (llama/13626)  [Eve]
dfa38af  Vulkan: Add f32 accumulator support to quantized mul mat to fix GLM4 32B incoherence (llama/13607)  [OccamRazor]
7681e32  cmake: use the current build config for vulkan-shaders-gen (llama/13595)  [Gilad S.]
ad8b504  vulkan: move common FA code to flash_attn_base.comp (llama/13556)  [jeffbolznv]
97d9aa6  vulkan: use scalar FA rather than coopmat2 when N==1 (llama/13554)  [jeffbolznv]
f8fd66d  cmake: simplify vulkan shader test logic (llama/13263)  [bandoti]
4d1bd4f  vulkan: KHR_coopmat flash attention (llama/13506)  [jeffbolznv]
06833bc  vulkan: workaround FA compile failures on macos (llama/13517)  [jeffbolznv]
3331abd  vulkan: scalar flash attention implementation (llama/13324)  [jeffbolznv]
53f8fee  vulkan: Allow up to 4096 elements for mul_mat_id row_ids (llama/13326)  [jeffbolznv]
b9cb11e  vulkan: Additional type support for unary, binary, and copy (llama/13266)  [jeffbolznv]
49be727  vulkan : fix lint (llama/0)  [ggerganov]
b21f8a1  vulkan: Add bfloat16 support (llama/12554)  [jeffbolznv]
710fdcf  vulkan: Handle src1 batch dimension in non-contiguous mat-vec-mul shader (llama/13191)  [jeffbolznv]
43d9f3e  vulkan : kernels for depthwise 2D convolution (CONV_2D_DW) (ggml/1204)  [Acly]
fd2d86d  vulkan: use uint array index to avoid glslang bug (llama/13193)  [jeffbolznv]
ac537d2  vulkan: matmul gcn tuning (llama/13016)  [Eve, OccamRazor]
e4d1f59  vulkan: support noncontiguous rms_norm (llama/13031)  [jeffbolznv]
fb0d243  graph : make FA compatible with MLA + add initial Metal kernels (llama/12953)  [ggerganov]
f844153  vulkan: enable coopmat2 FA gqa and split_k optimizations more often (llama/12931)  [jeffbolznv]
825889e  vulkan: use aligned loads for flash attention mask (llama/12853)  [jeffbolznv]
4b7a407  vulkan: In coopmat2 mmq, load q4_k/q5_k scales through shared memory (llama/12833)  [jeffbolznv]
4e46f41  vulkan: Use fp16 for the flash attention P*V multiplication (llama/12783)  [jeffbolznv]
4c5e449  ggml : add bilinear upscale support (ggml/1185)  [Diego Devesa]
77d7613  vulkan: fix NaN issue in flash attention shader (llama/12776)  [jeffbolznv]
a76ef69  vulkan: Use unclamped loads for flash attention mask (llama/12720)  [jeffbolznv]
b3bf710  Vulkan: Tune Vulkan mmq int dot shader for performance (llama/12767)  [OccamRazor]
1c89b7d  cmake: fix ggml-shaders-gen compiler paths containing spaces (llama/12747)  [Ronny Brendel]
ee422be  vulkan: Hybrid waitForFences/getFenceStatus to reduce fence latency (llama/12630)  [jeffbolznv]
2459781  vulkan: set cmake minimum and project name in vulkan-shaders (llama/12744)  [jeffbolznv]
7a1e8f8  vulkan: Fix missing cmake logic for dot product extension (llama/12721)  [jeffbolznv]
5ab06d6  vulkan: Implement split_k for coopmat2 flash attention. (llama/12627)  [jeffbolznv]
fac18c1  cmake: remove caching from vulkan coopmat checks (llama/12719)  [bandoti]
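
Several entries above (ef3a7d0, fdc26e7, 855a3bf) concern multi-threaded use of the backend. The Vulkan specification makes the VkQueue parameter of vkQueueSubmit externally synchronized, so an application submitting from multiple threads must serialize access to a shared queue itself. A minimal C++ sketch of that general pattern, assuming one mutex per queue; the identifiers g_submit_mutex and submit_locked are illustrative, not ggml-vulkan's actual code:

    #include <mutex>
    #include <vulkan/vulkan.h>

    // The Vulkan spec requires host access to the queue passed to
    // vkQueueSubmit to be externally synchronized; one mutex per
    // shared VkQueue is the simplest way to satisfy that.
    static std::mutex g_submit_mutex; // hypothetical per-queue lock

    VkResult submit_locked(VkQueue queue, uint32_t submit_count,
                           const VkSubmitInfo* submits, VkFence fence) {
        std::lock_guard<std::mutex> lock(g_submit_mutex);
        return vkQueueSubmit(queue, submit_count, submits, fence);
    }

Routing every submission through such a wrapper keeps the synchronization rule in one place instead of scattering locks across call sites.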