ZeroWw/Test

Tags: Text Generation · Transformers · Safetensors · GGUF · mistral · conversational · text-generation-inference · 8-bit precision · bitsandbytes
License: mit
Repository size: 136 GB · 1 contributor · History: 18 commits
Latest commit f16a678 (verified, over 1 year ago) by ZeroWw: "Upload gemma-1.1-7b-it.f16.q5.gguf with huggingface_hub"
Files and versions

| File | Size | Last commit message |
|---|---|---|
| .gitattributes | 2.58 kB | Upload gemma-1.1-7b-it.f16.q5.gguf with huggingface_hub |
| Mistral-7B-Instruct-v0.3.f16.gguf | 14.5 GB | Upload Mistral-7B-Instruct-v0.3.f16.gguf with huggingface_hub |
| Mistral-7B-Instruct-v0.3.f16.q4.gguf | 4.27 GB | Upload Mistral-7B-Instruct-v0.3.f16.q4.gguf with huggingface_hub |
| Mistral-7B-Instruct-v0.3.f16.q4.q3.gguf | 4.53 GB | Upload Mistral-7B-Instruct-v0.3.f16.q4.q3.gguf with huggingface_hub |
| Mistral-7B-Instruct-v0.3.f16.q5.gguf | 5.47 GB | Upload Mistral-7B-Instruct-v0.3.f16.q5.gguf with huggingface_hub |
| Mistral-7B-Instruct-v0.3.f16.q6.gguf | 6.26 GB | Upload Mistral-7B-Instruct-v0.3.f16.q6.gguf with huggingface_hub |
| Mistral-7B-Instruct-v0.3.q8.q5.gguf | 5.14 GB | Upload Mistral-7B-Instruct-v0.3.q8.q5.gguf with huggingface_hub |
| Mistral-7B-Instruct-v0.3.q8.q6.gguf | 5.95 GB | Upload Mistral-7B-Instruct-v0.3.q8.q6.gguf with huggingface_hub |
| README.md | 24 Bytes | initial commit |
| TextBase-7B-v0.1.f16.gguf | 14.5 GB | Upload TextBase-7B-v0.1.f16.gguf with huggingface_hub |
| TextBase-7B-v0.1.f16.q5.gguf | 5.46 GB | Upload TextBase-7B-v0.1.f16.q5.gguf with huggingface_hub |
| TextBase-7B-v0.1.f16.q6.gguf | 6.25 GB | Upload TextBase-7B-v0.1.f16.q6.gguf with huggingface_hub |
| TextBase-7B-v0.1.q8.gguf | 7.7 GB | Upload TextBase-7B-v0.1.q8.gguf with huggingface_hub |
| config.json | 1.13 kB | Upload folder using huggingface_hub |
| gemma-1.1-7b-it.f16.q5.gguf | 7.07 GB | Upload gemma-1.1-7b-it.f16.q5.gguf with huggingface_hub |
| gemma-7b-it.f16.gguf | 17.1 GB | Upload gemma-7b-it.f16.gguf with huggingface_hub |
| gemma-7b-it.f16.q5.gguf | 7.07 GB | Upload gemma-7b-it.f16.q5.gguf with huggingface_hub |
| gemma-7b-it.f16.q6.gguf | 7.94 GB | Upload gemma-7b-it.f16.q6.gguf with huggingface_hub |
| gemma-7b-it.q8.gguf | 9.08 GB | Upload gemma-7b-it.q8.gguf with huggingface_hub |
| generation_config.json | 111 Bytes | Upload folder using huggingface_hub |
| model-00001-of-00002.safetensors | 4.95 GB | Upload folder using huggingface_hub |
| model-00002-of-00002.safetensors | 2.57 GB | Upload folder using huggingface_hub |
| model.safetensors.index.json | 61.2 kB | Upload folder using huggingface_hub |
| special_tokens_map.json | 414 Bytes | Upload folder using huggingface_hub |
| tokenizer.json | 1.96 MB | Upload folder using huggingface_hub |
| tokenizer.model | 587 kB | Upload folder using huggingface_hub |
| tokenizer_config.json | 137 kB | Upload folder using huggingface_hub |

All files were last modified over 1 year ago.