lfm2-1.2b-sermon-instruct-qlora

Author: Delight Aheebwa
Contact: via Hugging Face or GitHub profile (delight2004)

Model Overview

  • Type: Causal Language Model (LM)
  • Base Model: LiquidAI/LFM2-1.2B
  • Fine-tuning technique: QLoRA with PEFT (LoRA adapters)
  • Language(s): English only
  • Intended Use: Research, educational, and sermon content generation on Christian and theological topics (especially inspired by John Piper's teachings).
  • Tags: Uganda, theology, Christianity
  • License: OpenRAIL Non-Commercial Variant
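
A minimal inference sketch using Transformers + PEFT, assuming this repo hosts the LoRA adapter on top of the base model (if merged weights are hosted instead, pass the repo id directly to AutoModelForCausalLM). The prompt and generation settings are illustrative, and a recent transformers release with LFM2 support is assumed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "LiquidAI/LFM2-1.2B"
adapter_id = "delight2004/lfm2-1.2b-sermon-instruct-qlora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter
model.eval()

# Illustrative prompt; the model was tuned on sermon-style instruction data.
prompt = "Write a short sermon outline on the theme of hope."
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```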

Dataset & Training

  • Data source: Transcripts of YouTube sermons by John Piper (excluding "Ask Pastor John" podcast transcripts)
  • Dataset size: 165 entries after filtering (~10% set aside for validation)
  • Preprocessing: Splitting and curation as detailed in the training notebook
  • Training details:
    • Hardware: Google Colab free tier (T4 GPU)
    • epochs: 4
    • batch size: 1 (gradient_accumulation_steps=4)
    • learning rate: 2e-5
    • sequence length: 512
    • quantization: 4-bit (bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16)
    • Only LoRA adapter params were trained (~0.05% of total)
    • Full trainer/config code: see john_piper.ipynb (listed under Technical below); a configuration sketch follows this list
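
A sketch of the QLoRA setup described above. The quantization settings and training hyperparameters match this card; the data file name, LoRA rank/alpha/dropout, and target modules are illustrative assumptions rather than the exact values from john_piper.ipynb, and some TRL argument names vary across versions.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# ~10% of the filtered sermon entries set aside for validation, per this card.
# "sermons.jsonl" is a hypothetical filename for the curated transcripts.
ds = load_dataset("json", data_files="sermons.jsonl")["train"]
splits = ds.train_test_split(test_size=0.1, seed=42)

# 4-bit NF4 quantization with bfloat16 compute, as stated on this card.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "LiquidAI/LFM2-1.2B", quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("LiquidAI/LFM2-1.2B")

# LoRA hyperparameters here are assumed values, not taken from the notebook.
lora_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")

# Trainer settings matching the card: 4 epochs, batch size 1 with
# gradient_accumulation_steps=4, lr 2e-5, sequence length 512.
args = SFTConfig(
    output_dir="lfm2-sermon-qlora",
    num_train_epochs=4,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,
    learning_rate=2e-5,
    max_seq_length=512,  # renamed max_length in newer TRL releases
    bf16=True,
)

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=splits["train"],
    eval_dataset=splits["test"],
    peft_config=lora_config,          # wraps the model so only adapters train
    processing_class=tokenizer,       # tokenizer= in older TRL versions
)
trainer.train()
```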

Evaluation

  • No formal evaluation/benchmarking was conducted. Use at your own discretion – feedback and community tests are welcome.

Limitations & Disclaimer

  • Not intended for production or commercial use.
  • Outputs should not be treated as official theological advice.
  • Possible biases and limitations inherited from the dataset and base model – outputs may reflect the original preacher's views.
  • Model may hallucinate or generate plausible but incorrect theological claims or references.

Technical

  • Architecture: causal LM, LiquidAI LFM2 (1.2B params)
  • Adapter config: PEFT/QLoRA
  • Training framework: Hugging Face Transformers, TRL, PEFT, bitsandbytes, PyTorch
  • Compute: Google Colab T4 (free tier, single GPU)
  • Notebook: john_piper.ipynb
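
If a standalone checkpoint is more convenient than the adapter, the LoRA weights can be folded into the base model. A brief sketch, with repo ids taken from this card and an assumed output path:

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "LiquidAI/LFM2-1.2B", torch_dtype=torch.bfloat16
)
merged = PeftModel.from_pretrained(
    base, "delight2004/lfm2-1.2b-sermon-instruct-qlora"
).merge_and_unload()  # folds the LoRA deltas into the base weights
merged.save_pretrained("lfm2-sermon-merged")  # output path is an assumption
```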

Citation

If you use this model, please cite it or reference its Hugging Face page, and acknowledge John Piper's YouTube sermons as the data source.
