# Qwen2.5-1.5B Fine-tuned for KPI Tool Calling
This model is fine-tuned from Qwen/Qwen2.5-1.5B-Instruct for manufacturing KPI tool calling.
## Tools

- `get_oee`: Get OEE (Overall Equipment Effectiveness) metrics
- `get_availability`: Get availability/uptime metrics
## Training Details
- Base Model: Qwen/Qwen2.5-1.5B-Instruct
- Fine-tuning Method: LoRA (r=32, alpha=64)
- Training Samples: 500
- Epochs: 4
- Learning Rate: 0.00015
- Dataset: bhaiyahnsingh45/kpi-tool-calling
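The LoRA hyperparameters listed above (r=32, alpha=64) can be expressed as a `peft` training config. This is a minimal sketch: the card only states r and alpha, so `target_modules` and `lora_dropout` below are assumptions, not values from the actual training run.

```python
from peft import LoraConfig

# r and lora_alpha come from this card; the rest are assumptions.
# Qwen2.5 attention projections are a common target_modules choice,
# but the card does not say which modules were adapted.
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    lora_dropout=0.05,  # assumed, not stated on the card
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    task_type="CAUSAL_LM",
)
```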
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch
import json

# Load the base model and attach the LoRA adapter
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-1.5B-Instruct",
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, "bhaiyahnsingh45/Qwen2.5-1.5B-Instruct-kpi-tool-calling")
tokenizer = AutoTokenizer.from_pretrained("bhaiyahnsingh45/Qwen2.5-1.5B-Instruct-kpi-tool-calling")

# Define the tools schema
tools_json = '''
[
  {
    "type": "function",
    "function": {
      "name": "get_oee",
      "description": "Get OEE (Overall Equipment Effectiveness) metrics",
      "parameters": {
        "type": "object",
        "properties": {
          "custom_start_date": {"type": "string", "description": "Start date (YYYY-MM-DD HH:MM:SS)"},
          "custom_end_date": {"type": "string", "description": "End date (YYYY-MM-DD HH:MM:SS)"},
          "machine": {"type": "string", "description": "Machine name"},
          "line": {"type": "string", "description": "Production line"},
          "plant": {"type": "string", "description": "Plant name"}
        },
        "required": ["custom_start_date", "custom_end_date"]
      }
    }
  },
  {
    "type": "function",
    "function": {
      "name": "get_availability",
      "description": "Get availability/uptime metrics",
      "parameters": {
        "type": "object",
        "properties": {
          "custom_start_date": {"type": "string", "description": "Start date (YYYY-MM-DD HH:MM:SS)"},
          "custom_end_date": {"type": "string", "description": "End date (YYYY-MM-DD HH:MM:SS)"},
          "machine": {"type": "string", "description": "Machine name"},
          "line": {"type": "string", "description": "Production line"},
          "plant": {"type": "string", "description": "Plant name"}
        },
        "required": ["custom_start_date", "custom_end_date"]
      }
    }
  }
]
'''
tools = json.loads(tools_json)

# System prompt
system_prompt = "You are a function calling assistant for manufacturing KPI data. Respond ONLY with function calls."

# Example query
user_query = "Show me the OEE for LINE_A01 from January 1st to January 31st 2024"

# Format messages
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": user_query},
]

# Generate response
text = tokenizer.apply_chat_template(messages, tools=tools, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=512, temperature=0.1, do_sample=True)
response = tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(response)
```
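The generated `response` contains the model's tool calls in Qwen2.5's `<tool_call>…</tool_call>` format. A minimal sketch of extracting them into Python dicts follows; the helper name and the sample string are illustrative, not part of the model's API.

```python
import json
import re

def parse_tool_calls(response: str) -> list:
    """Extract tool calls from a Qwen2.5 response.

    Qwen2.5's chat template wraps each call in <tool_call>...</tool_call>
    tags containing a JSON object with "name" and "arguments" keys.
    """
    calls = []
    for raw in re.findall(r"<tool_call>\s*(.*?)\s*</tool_call>", response, re.DOTALL):
        try:
            calls.append(json.loads(raw))
        except json.JSONDecodeError:
            pass  # skip malformed calls rather than crash
    return calls

# Illustrative sample of what the model typically emits
sample = (
    '<tool_call>\n'
    '{"name": "get_oee", "arguments": {"custom_start_date": "2024-01-01 00:00:00", '
    '"custom_end_date": "2024-01-31 23:59:59", "line": "LINE_A01"}}\n'
    '</tool_call>'
)
print(parse_tool_calls(sample))
```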
## Example Queries and Expected Output
| Query | Tool Called | Key Arguments |
|---|---|---|
| "Get OEE for LINE_A01 from 2024-01-01 to 2024-01-31" | `get_oee` | `line: LINE_A01` |
| "Show availability for machine LINE_B02_FILLER_M01 last week" | `get_availability` | `machine: LINE_B02_FILLER_M01` |
| "Compare OEE between LINE_A01 and LINE_B02" | `get_oee` (2 calls) | Different `line` values |
| "Get both OEE and availability for Plant_Austin" | `get_oee` + `get_availability` | `plant: Plant_Austin` |
## Evaluation Results
- Correct: 15/50 (30.0%)
- Partially Correct: 28/50 (56.0%)
- Incorrect: 7/50 (14.0%)
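The card does not publish the evaluation script, so the exact grading criteria are unknown. The sketch below is one plausible scheme consistent with the three categories above (hypothetical, not the actual methodology): a call is correct when the tool name and all arguments match, partially correct when the right tool is called with imperfect arguments, and incorrect otherwise.

```python
def grade_call(predicted: dict, expected: dict) -> str:
    """Grade one predicted tool call against the expected one.

    "correct":   same tool name and identical arguments
    "partial":   right tool, but arguments missing or wrong
    "incorrect": wrong tool entirely
    """
    if predicted.get("name") != expected.get("name"):
        return "incorrect"
    if predicted.get("arguments", {}) == expected.get("arguments", {}):
        return "correct"
    return "partial"

# Right tool, wrong line argument -> graded as partial
example = grade_call(
    {"name": "get_oee", "arguments": {"line": "LINE_A01"}},
    {"name": "get_oee", "arguments": {"line": "LINE_B02"}},
)
print(example)
```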