Implement GGML_CPU_ALL_VARIANTS for PowerPC (llama/14286) 0bcd751 Christian Kastner Diego Devesa committed on Jun 20, 2025
ggml-cpu : remove unnecessary arm feature detection (llama/14281) 62cf694 Diego Devesa committed on Jun 19, 2025
llamafile : support s390x SIMD instruction set (llama/14273) 26bafb6 taronaeo committed on Jun 19, 2025
ggml-cpu: fix uncaught underscore terminators (llama/14023) c005248 taronaeo committed on Jun 18, 2025
ggml: Add Apple support for GGML_CPU_ALL_VARIANTS (llama/14258) 9d1d21b Charles Xu committed on Jun 18, 2025
ggml: Add Android support for GGML_CPU_ALL_VARIANTS (llama/14206) 7ddd89c Charles Xu committed on Jun 16, 2025
Implement GGML_CPU_ALL_VARIANTS for ARM (llama/14080) c9cec9d Christian Kastner committed on Jun 11, 2025
ggml-cpu : split arch-specific implementations (llama/13892) 8c833e9 xctan ggerganov committed on Jun 9, 2025
releases : use dl backend for linux release, remove arm64 linux release (llama/13996) 9896625 Diego Devesa committed on Jun 4, 2025
cmake : Handle mixed-case 'Power' strings in POWER CPU detection (llama/13966) bc1415b shalinib root committed on Jun 2, 2025
threading: support for GGML_SCHED_PRIO_LOW, update thread info on Windows to avoid throttling (llama/12995) d5d55f2 Max Krasnyansky Diego Devesa committed on May 31, 2025
cmake: Factor out CPU architecture detection (llama/13883) b436dcc Christian Kastner committed on May 29, 2025
ggml: aarch64: Implement SVE F32 kernels for Mamba Sequential Scan Algorithm (llama/13882) bfc960a Vineel Abhinav ggerganov committed on May 29, 2025
ggml: aarch64: Implement SVE F32 kernels for vector functions (llama/13843) 7941e9b Vineel Abhinav committed on May 29, 2025
ggml-cpu: x86 feature detection is specific to x86 (llama/13811) d86ba47 Christian Kastner committed on May 27, 2025
ggml-cpu : set openmp wait time if not set (llama/13758) 276d920 Diego Devesa committed on May 24, 2025
ggml-cpu: Update KleidiAI to v1.6 and fix include directives (llama/13509) 7463545 Dan Johansson committed on May 13, 2025
ggml-cpu: Integrate fp32=bf16xbf16 SME KleidiAI kernel (llama/13053) 0612f1f Dan Johansson Charles Xu committed on May 12, 2025
rpc : use backend registry, support dl backends (llama/13304) 0286805 Diego Devesa committed on May 4, 2025
ggml: move fp16/bf16 conversion optimizations to CPU backend + export conversion APIs (llama/13107) c47823e sxx-404 committed on Apr 26, 2025
ggml : add SSE 4.2 and x64 base variant for CPUs without AVX (llama/12871) f8795d3 Diego Devesa committed on Apr 21, 2025
ggml : Add AVX512 implementation of GEMM - Q4_Kx8 (llama/12829) 2457b99 Srihari-mcw committed on Apr 15, 2025
ggml: use _mm[512/256]_dpbusd[_avx]_epi32 to directly accumulate into the result register (llama/12773) acb674d sxx-404 committed on Apr 14, 2025
ggml: fix compilation error on s390x (llama/12848) 2458d68 Aaron Teo Aleksei Nikiforov committed on Apr 11, 2025
cpu: fix cpu backend's supports-op for GET_ROWS_BACK; fixes a fatal error when running test-backend-ops with only the CPU backend (ggml/1190) ee7706c cmdr2 committed on Apr 11, 2025
ggml-cpu-impl.h: do not redefine bool on POWER9 (llama/12856) bb47d22 Piotr Kubaj committed on Apr 9, 2025
llama : fix FA when KV cache is not used (i.e. embeddings) (llama/12825) e7cb2dc ggerganov committed on Apr 8, 2025
ggml : add more generic custom op, remove deprecated custom ops (ggml/1183) ba7a5f8 Diego Devesa committed on Apr 9, 2025