examples : support progress_callback API for addon.node (#2941) 3f6a806 Lin Xiaodong committed on Mar 28, 2025 (see the usage sketch after this list)
ggml : sync/merge cmake, riscv, powerpc, add common.cmake (ggml/0) f695cbf ggerganov committed on Mar 27, 2025
llamafile : ppc64le MMA implementation for Q4_0. (llama/12489) d154905 amritahs-ibm committed on Mar 27, 2025
SYCL: implement memset ggml backend buffer interface (llama/12580) 3f95f2b Akarshan Biswas committed on Mar 27, 2025
SYCL: disable Q4_0 reorder optimization (llama/12560) 33f8316 Akarshan Biswas committed on Mar 25, 2025
opencl: simplify kernel embedding logic in cmakefile (llama/12503) 5f131ac lhez Max Krasnyansky committed on Mar 24, 2025
vulkan: fix mul_mat_vec failure in backend tests (llama/12529) 09dd86a jeffbolznv committed on Mar 24, 2025
vulkan: Optimize mul_mat_vec p021 and nc shaders (llama/12505) 6868981 jeffbolznv committed on Mar 22, 2025
Vulkan: RTE rounding for cpy to quant (llama/12480) 8707beb stduhpf jeffbolznv committed on Mar 21, 2025
vulkan: workaround for AMD Windows driver 16 bit unpack8 bug (llama/12472) 417a5d6 Eve committed on Mar 21, 2025
Fix build on Windows when ccache enabled (ggml/9954) (llama/9976) bbd0292 蕭澧邦 Romain Biessy committed on Mar 21, 2025
ggml : block interleaving support for Q4_K quantization for x86 AVX2 architecture (llama/12332) 0729506 Srihari-mcw committed on Mar 20, 2025
CUDA: Improve flash decoding kernel GPU occupancy for BS=1 case (llama/12183) 3a7ca19 Gaurav Garg JohannesGaessler committed on Mar 19, 2025
vulkan: optimize iq1 coopmat2 dequant functions (llama/12427) 53dd8ad jeffbolznv committed on Mar 19, 2025
Fix visionOS build and add CI (llama/12415) ecb4322 guusw Giovanni Petrantoni committed on Mar 19, 2025
vulkan: Submit once enough matmul work has been recorded (llama/12406) ec77b2c jeffbolznv committed on Mar 19, 2025
musa: override warp_size of musa device to 32 (llama/12445) 184c152 R0CKSTAR committed on Mar 18, 2025
SYCL: using graphs is configurable by environment variable and compile option (llama/12371) c18969f Łukasz Ślusarczyk Romain Biessy committed on Mar 18, 2025
Vulkan: Default to 1GB allocations instead of 4GB to avoid fragmentation and driver issues (llama/12434) 55088d3 OccamRazor committed on Mar 18, 2025
fixed compilation warnings in ggml-sycl (llama/12424) 77ff985 Łukasz Ślusarczyk committed on Mar 18, 2025
cuda : enable CUDA Graph on CUDA Toolkit < 12.x (llama/12394) 1e69b8c Gaurav Garg committed on Mar 17, 2025
vulkan: Add N/2 and N/4 optimized paths in coopmat2 shader (llama/12312) c9f86c1 jeffbolznv committed on Mar 17, 2025
vulkan: use fp32 in coopmat2 q4_k dequant function (llama/12309) 9ca84c6 jeffbolznv committed on Mar 17, 2025
vulkan: Pad N dimension of B matrix for coopmat2 perf, to avoid bounds checking (llama/12273) 5d51f1c jeffbolznv committed on Mar 17, 2025
vulkan: Adjust coopmat2 tile sizes and selection heuristic (llama/12258) 3cc6539 jeffbolznv committed on Mar 17, 2025
cmake : enable building llama.cpp using system libggml (llama/12321) 6da01d6 Christian Kastner committed on Mar 17, 2025
SYCL: set extras only on GGML_TYPE_Q4_0 (llama/12366) 6f03947 Akarshan Biswas committed on Mar 17, 2025
SYCL : support non-contiguous tensors in binary ops (add, sub, etc) (llama/12399) 2d7a940 fairydreaming sszymczyk committed on Mar 15, 2025
sycl : variable sg_size support for mmvq kernels (llama/12336) 83e6f74 Alberto Cabrera Pérez committed on Mar 12, 2025
CUDA/HIP: Fix fattn-vec-* when device warp size is not 32 (llama/12315) 2adc060 uvos committed on Mar 12, 2025
CUDA/HIP: refactor mmqv to unify the calculation of nwarps and rows per block between host and device code. (llama/12177) 1f75790 uvos JohannesGaessler committed on Mar 11, 2025
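Usage sketch for the addon.node progress_callback change (#2941), referenced from the first entry above. This is a minimal, hypothetical example, not the example's actual code: it assumes the native addon exposes a whisper(params, callback) entry point that can be wrapped with promisify, and that the new progress_callback field is invoked with an integer percentage. The field name is taken from the commit title; the build path, params shape, and callback signature are assumptions, so check examples/addon.node for the actual API.

// progress-sketch.ts -- hypothetical usage of the addon.node progress_callback
// Requires Node.js with @types/node; paths and field names are assumptions.
import * as path from "path";
import { promisify } from "util";

// Assumed location of the compiled native addon (built by the example's CMake project).
const { whisper } = require(path.join(__dirname, "build", "Release", "addon.node"));
const whisperAsync = promisify(whisper);

async function main(): Promise<void> {
  const result = await whisperAsync({
    language: "en",
    model: path.join(__dirname, "models", "ggml-base.en.bin"),
    fname_inp: path.join(__dirname, "samples", "jfk.wav"),
    // Assumed contract: called periodically with an integer 0-100 while transcribing.
    progress_callback: (progress: number) => {
      console.log(`transcription progress: ${progress}%`);
    },
  });
  console.log(result);
}

main().catch(console.error);

Wrapping a callback-style native entry point with promisify keeps the calling code async/await-friendly; if the addon already returns a promise, the wrapper can simply be dropped.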