# cnfusion/Mellum-4b-base-mlx-fp16
This model was converted to MLX format from [JetBrains/Mellum-4b-base](https://huggingface.co/JetBrains/Mellum-4b-base) using mlx-lm version 0.22.3.
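For reference, a conversion like this can be reproduced with mlx-lm's `convert` utility. The sketch below assumes the documented Python API (`from mlx_lm import convert`); the output directory name is illustrative, and keyword defaults may differ across mlx-lm versions.

```python
# Minimal sketch: convert the upstream weights to fp16 MLX format.
from mlx_lm import convert

convert(
    "JetBrains/Mellum-4b-base",          # source Hugging Face repo
    mlx_path="Mellum-4b-base-mlx-fp16",  # local output directory (illustrative)
    dtype="float16",                     # keep fp16 weights; no quantization
)
```

The same conversion is also available from the command line via the `mlx_lm.convert` entry point.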
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

model, tokenizer = load("cnfusion/Mellum-4b-base-mlx-fp16")

prompt = "hello"

# Apply the chat template if the tokenizer ships one; as a base model,
# Mellum-4b-base may not, in which case the raw prompt is used as-is.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
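Mellum-4b-base is a code completion model (see the RepoBench and HumanEval Infilling results below), so a more representative prompt is an unfinished code snippet. Here is a minimal sketch, assuming the same `load`/`generate` API as above; the prompt and the `max_tokens` value are illustrative:

```python
from mlx_lm import load, generate

model, tokenizer = load("cnfusion/Mellum-4b-base-mlx-fp16")

# Feed the model an unfinished function and let it generate the body.
code_prompt = '''def fibonacci(n: int) -> int:
    """Return the n-th Fibonacci number."""
'''

completion = generate(
    model,
    tokenizer,
    prompt=code_prompt,
    max_tokens=128,  # cap the length of the continuation
    verbose=True,
)
```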
- **Model size:** 4B params
- **Tensor type:** F16
## Model tree for cnfusion/Mellum-4b-base-mlx-fp16

**Base model:** [JetBrains/Mellum-4b-base](https://huggingface.co/JetBrains/Mellum-4b-base)
## Evaluation results

All scores are self-reported and carried over from the base model, JetBrains/Mellum-4b-base. The per-row RepoBench scores correspond to the context-length buckets reported on the upstream model card.

**RepoBench 1.1 (exact match)**

| Context length | Python EM | Java EM |
|---|---|---|
| Average | 0.259 | 0.286 |
| Average ≤ 8k | 0.280 | 0.311 |
| 2k | 0.282 | 0.320 |
| 4k | 0.280 | 0.321 |
| 8k | 0.278 | 0.291 |
| 12k | 0.245 | 0.249 |
| 16k | 0.211 | 0.247 |

**SAFIM (pass@1)**

| Split | pass@1 |
|---|---|
| Average | 0.381 |
| Algorithmic | 0.253 |
| Control | 0.384 |
| API | 0.506 |

**HumanEval Infilling (pass@1)**

| Variant | pass@1 |
|---|---|
| Single-Line | 0.662 |
| Multi-Line | 0.385 |