# Insurance Expert - Llama 3.2-3B LoRA
This model is a fine-tuned version of meta-llama/Llama-3.2-3B using LoRA (Low-Rank Adaptation), specialized for insurance domain expertise.
## Model Description
- Base Model: Llama 3.2-3B (3.26B parameters)
- Fine-tuning Method: LoRA (Low-Rank Adaptation); see the configuration sketch after this list
- Training Dataset: insuranceQA-v2 (21,325 Q&A pairs)
- Domain: Insurance and Financial Services
- Language: English
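The exact LoRA hyperparameters (rank, alpha, dropout, target modules) are not listed in this card, so the snippet below is only a minimal sketch of how such an adapter is typically configured with the `peft` library; all values shown are assumptions, not the settings used to train this adapter.

```python
# Minimal LoRA setup sketch with peft; r, lora_alpha, lora_dropout and
# target_modules are assumed values, not this adapter's actual hyperparameters.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-3B")

lora_config = LoraConfig(
    r=16,                # assumed adapter rank
    lora_alpha=32,       # assumed scaling factor
    lora_dropout=0.05,   # assumed dropout
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the low-rank adapter weights are trainable
```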
## Quick Start
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch

# Load base model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B")
tokenizer.pad_token = tokenizer.eos_token  # Llama has no pad token by default
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-3B",
    torch_dtype=torch.float16,
    device_map="auto"
)

# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "Abdullah-abushammala/insurance-expert-llama-3b-lora")
model.eval()

def ask_insurance_expert(question):
    prompt = f"Question: {question}\nAnswer:"
    inputs = tokenizer(prompt, return_tensors="pt", padding=True).to(model.device)
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_length=120,
            temperature=0.4,
            do_sample=True,
            top_p=0.8,
            repetition_penalty=1.3,
            no_repeat_ngram_size=3,
            pad_token_id=tokenizer.eos_token_id,
            eos_token_id=tokenizer.eos_token_id
        )
    # Strip the prompt and return only the generated answer
    response = tokenizer.decode(outputs[0], skip_special_tokens=True)
    return response.split("Answer:", 1)[1].strip()

# Example usage
answer = ask_insurance_expert("What is a deductible in health insurance?")
print(answer)
```
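For deployment you may prefer to merge the adapter weights into the base model so inference no longer requires `peft` at runtime. The snippet below is a sketch using the standard `merge_and_unload()` call from `peft`; the output directory name is just a placeholder.

```python
# Optional: merge the LoRA weights into the base model for standalone inference.
# "insurance-expert-merged" is a placeholder output path, not a published repo.
merged_model = model.merge_and_unload()
merged_model.save_pretrained("insurance-expert-merged")
tokenizer.save_pretrained("insurance-expert-merged")
```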