QUEST-4B

QUEST-4B is a general-purpose deep research agent: a supervised fine-tuned (SFT) model built on the Qwen3.5 family (dense).

Benchmark results

| Benchmark          | Metric          | Score |
|--------------------|-----------------|-------|
| BrowseComp         | avg@3           | 40.0  |
| Mind2Web 2         | avg@3           | 24.3  |
| HLE                | avg@3           | 36.2  |
| DeepResearch Bench | avg@3           | 22.0  |
| BrowseComp-Plus    | avg@3           | 52.1  |
| WideSearch         | Item F1 (avg@4) | 55.0  |
| GAIA               | avg@3           | 77.7  |
| LiveResearchBench  | avg@3           | 62.1  |

Quick start

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "osunlp/QUEST-4B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto",
)
```

Apply the model's chat template with `tokenizer.apply_chat_template(...)` before passing prompts to the model.
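A minimal end-to-end sketch of templated generation, assuming the standard `transformers` chat API; the prompt and generation settings below are illustrative, not recommendations from the model authors.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "osunlp/QUEST-4B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto",
)

# Chat messages follow the usual role/content schema.
messages = [
    {"role": "user", "content": "Summarize the key trade-offs between dense and MoE language models."},
]

# apply_chat_template inserts the model's special tokens and, with
# add_generation_prompt=True, the assistant turn prefix.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)

# Decode only the newly generated tokens, dropping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```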

License

Released under the Apache License 2.0.
