// File: whisper.cpp ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_1-ncols2_16.cu
// Commit 507d30c by JohannesGaessler: "CUDA: FA support for Deepseek (Ampere or newer)" (llama/13306)
// This file has been autogenerated by generate_cu_files.py, do not edit manually.
#include "../fattn-mma-f16.cuh"
// Instantiate the tensor-core (MMA) FlashAttention kernel for K/Q head size 576 and
// V head size 512, with ncols1 = 1 and ncols2 = 16; these head sizes correspond to
// DeepSeek's MLA attention, per the commit message above.
DECL_FATTN_MMA_F16_CASE(576, 512, 1, 16);