My personal LLM archive, kept so the models stay available even if a vendor removes them.
Dmitrii Kostakov PRO
kostakoff
AI & ML interests
MLOps
Recent Activity
posted an update about 13 hours ago
I found it very funny that the Hugging Face profile has a specific section where we can share our hardware.
It really brings back memories of the good old days when we used to flex our custom PC specs on enthusiast forums 20 years ago! That inspired me to fill out my own profile and share it here.
And this is my first set of GPUs that I am using to learn MLOps (a short capability-check sketch follows the list):
- RTX 3090 – the best one; unfortunately it doesn't support the latest FP8 and FP4, but it’s still very powerful.
- Tesla V100 – performance is almost like the RTX 3090, just much older.
- Tesla P100 – old and lacks tensor cores, but can still handle small models.
- Radeon MI50 – old, similar to the P100, but uses ROCm instead of CUDA, which is actually a pretty good experience to set up.
- GTX 1080 Ti – mostly useless, since it has no fast FP16 support.
- GTX 1660 – first-generation Turing, but mostly useless as well.
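Since the list above is mostly about which precisions and tensor cores each card offers, here is a minimal sketch for checking that on your own machine. It is not from the original post; it just assumes a standard PyTorch install (CUDA or ROCm build) and uses only stock torch.cuda calls:

```python
import torch

# Minimal capability check; on ROCm builds an AMD card like the MI50
# is exposed through the same torch.cuda API.
if not torch.cuda.is_available():
    print("No GPU visible to PyTorch.")
else:
    for i in range(torch.cuda.device_count()):
        torch.cuda.set_device(i)
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 2**30:.1f} GiB")
        print(f"  compute capability: {props.major}.{props.minor}")
        # Tensor cores arrived with Volta (7.x); Pascal cards such as the
        # P100 and 1080 Ti report 6.x. (NVIDIA numbering; ROCm devices
        # report their own architecture values here.)
        print(f"  tensor cores      : {'yes' if props.major >= 7 else 'no'}")
        # BF16 needs Ampere (8.x) or newer, and FP8 needs Ada/Hopper,
        # which is why the RTX 3090 above stops at BF16.
        print(f"  bf16 supported    : {torch.cuda.is_bf16_supported()}")
```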
https://huggingface.co/llmlaba
upvoted an article about 14 hours ago
GGML and llama.cpp join HF to ensure the long-term progress of Local AI
updated a collection 1 day ago: favorite