
LFM2.5-1.2B-Thinking

Model Description

LFM2.5-1.2B-Thinking is a ~1.17B-parameter “thinking” (reasoning-tuned) language model from Liquid AI’s LFM2.5 family, designed for efficient deployment (including on-device/edge scenarios).

It supports long-context usage (up to 32,768 tokens) and is tuned with a focus on instruction following and reasoning-oriented behavior.

Quickstart

Follow the deployment instructions on this page to run the model with one line of code: https://sdk.nexa.ai/model/LFM2.5-1.2B-Thinking
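
For local experimentation outside the Nexa SDK, a minimal transformers-based sketch is shown below. The repo id (LiquidAI/LFM2.5-1.2B-Thinking) and direct transformers support are assumptions rather than facts stated on this card; the NPU-packaged weights in this repository are intended to be run through the Nexa SDK link above.

```python
# Minimal sketch, assuming transformers-compatible (non-NPU) weights exist
# under an upstream repo id; both the repo id and compatibility are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2.5-1.2B-Thinking"  # assumed upstream repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [
    {"role": "user", "content": "What is 17 * 24? Think step by step."},
]
# The chat template formats the conversation the way the model was tuned on.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=512)
# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```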

Features

  • Reasoning-oriented: tuned for stronger step-by-step problem solving than the non-reasoning base variants.
  • Conversational AI: context-aware dialogue using a chat template format.
  • Tool / function calling: supports tool-use patterns for agentic workflows (see the sketch after this list).
  • Long context: supports up to 32K context length.
  • Multilingual: supports multiple languages (including English and several major world languages).
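
Tool / function calling is typically driven through the chat template. The sketch below renders a tool-use prompt with the standard transformers tools= convention; whether this model's template consumes tool schemas exactly this way, and the repo id used, are assumptions to verify against the actual template.

```python
# Hedged tool-calling sketch: builds (but does not run) a tool-use prompt.
# The repo id and the template's handling of `tools=` are assumptions.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("LiquidAI/LFM2.5-1.2B-Thinking")

def get_weather(city: str) -> str:
    """Get the current weather for a city.

    Args:
        city: Name of the city to look up.
    """
    return "sunny, 22 C"  # illustrative stub; a real tool would call an API

messages = [{"role": "user", "content": "What's the weather in Paris right now?"}]
prompt = tokenizer.apply_chat_template(
    messages,
    tools=[get_weather],         # schema derived from the signature and docstring
    add_generation_prompt=True,
    tokenize=False,
)
print(prompt)  # inspect how tool definitions are injected into the prompt
```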

Use Cases

  • On-device assistants and private “local-first” chat experiences
  • Tool-using agents (structured actions via function calls)
  • Document Q&A and summarization (especially when paired with retrieval)
  • Structured extraction and classification tasks

Inputs and Outputs

Input:

  • Text prompts or conversation history, typically formatted using the model’s chat template.

Output:

  • Generated text (answers, explanations, reasoning responses).
  • Optional structured tool calls when prompted for tool-use behavior (sketched below).
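
For illustration, the shape of a conversation input and of a structured tool-call turn is sketched below. The field names for the tool-call turn follow the common transformers chat-message convention and are assumptions here; the exact format is defined by the model's chat template.

```python
# Illustrative shapes only; field names for the tool-call turn are assumptions.
conversation = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Summarize these release notes in three bullets."},
]

# An assistant turn that requests a tool call, in the common message format:
assistant_tool_call_turn = {
    "role": "assistant",
    "tool_calls": [
        {
            "type": "function",
            "function": {"name": "get_weather", "arguments": {"city": "Paris"}},
        }
    ],
}
```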

License

This repo is licensed under the Creative Commons Attribution–NonCommercial 4.0 (CC BY-NC 4.0) license, which allows use, sharing, and modification only for non-commercial purposes with proper attribution. All NPU-related models, runtimes, and code in this project are covered by this non-commercial license and cannot be used in commercial or revenue-generating applications. Commercial licensing or enterprise usage requires a separate agreement. For inquiries, please contact [email protected].
